Open assessment
[[Category:Glossary term]]
{{encyclopedia|moderator=Jouni}}
''For a description of the web-workspace related to open assessment, see [[Opasnet]].''


<section begin=glossary />
Open assessment is a method that attempts to answer the following research question, and to apply the answer in practical assessments: How can scientific information and value judgements be organised for improving societal decision-making in a situation where open participation is allowed?

In practice, open assessments are performed using Internet tools (notably [[Opasnet]]) alongside traditional tools. Stakeholders and other interested people can participate in, comment on, and edit the contents from an early phase of the process onwards. Open assessment is based on a clear information structure and on the [[scientific method]] as the ultimate rule for settling disputes. Open assessments explicitly include value judgements, which extends their use beyond the traditional area of risk assessment into risk management. However, value judgements go through the same [[open criticism]] as scientific claims; the main difference is that scientific claims are based on observations, whereas value judgements are based on the opinions of individuals.
<section end=glossary />


Open assessment can also refer to the actual making of such an assessment (precisely: open assessment process), or to the end product of the process (precisely: open assessment product or report). Usually the intended meaning of the term is clear from context, but where there is a danger of confusion, the precise term (open assessment method, process, or product) should be used.


==Open assessment as a methodology==


Open assessment is built on several different [[method]]s and principles that together form a coherent system for collecting, organising, synthesising, and using information. These [[method]]s and principles are briefly summarised here. A more detailed [[rationale]] for why exactly these [[method]]s are used and needed can be found on [[Open assessment method]]. In addition, each [[method]] or principle has a page of its own in [[Opasnet]].
===Purpose===


[[image:Assessment example.png|thumb|An example of an [[assessment]].]]
The basic idea of open assessment is to collect the information that is needed in a decision-making process. The information is organised as an [[assessment]] that predicts the impacts of different decision options on some outcomes of interest, and only to the level of detail that is necessary to fulfil the use purpose of informing decision-makers. An assessment is typically a quantitative model of the relevant issues that are causally affected by the decision and that affect the outcomes. Decisions, outcomes, and other issues are modelled as separate parts of an assessment, called [[variable]]s. In practice, [[assessment]]s and [[variable]]s are web pages in [[Opasnet]], a web-workspace dedicated to making such assessments. Such a web page contains all the information (text, numerical values, and software code) needed to describe and actually run that part of the assessment model.
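
This causal structure can be illustrated with a minimal sketch in plain [[R]], the language used for modelling in Opasnet. All variable names and numbers below are hypothetical, and the code is an illustration of the idea rather than actual Opasnet model code.

<pre>
# Minimal sketch of an assessment as a causal chain:
# decision -> exposure (variable) -> health effect (outcome).
# All names and numbers are hypothetical.
set.seed(1)
n <- 1000                                   # Monte Carlo sample size
base <- rlnorm(n, meanlog = 2, sdlog = 0.5) # uncertain baseline exposure, ug/m3

# Variable: exposure depends on the decision option.
exposure <- function(option) {
  if (option == "emission_filter") base * 0.6 else base
}

# Outcome: health effect as a simple linear function of exposure.
effect <- function(x) 0.002 * x             # cases per person-year

# Compare the expected outcome under each decision option.
options <- c("business_as_usual", "emission_filter")
sapply(options, function(opt) mean(effect(exposure(opt))))
</pre>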


===Basic concepts===


These web pages are also called '''[[information object]]s''', because they are the standard way of handling information as chunk-sized pieces in open assessments. Each object (or page) contains information about a particular issue. Each page also has the same [[attribute|universal structure]]: a '''research question''' (what is the issue?), a '''rationale''' (what do we know about the issue?), and a '''result''' (what is our current best answer to the research question?). The descriptions of these issues are built on a web page, and anyone can participate in reading or writing, just like in [[Wikipedia]]. Notably, the outcome is owned by everyone, and therefore the original authors or assessors possess no copyrights or other rights to prevent further editing.
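
As a sketch, the universal structure can be thought of as a simple record with three attributes. The R code below only illustrates the idea; the object and its values are hypothetical.

<pre>
# A minimal sketch of the universal structure of an information object,
# represented as a plain R list (illustrative only).
population_of_finland <- list(
  question  = "What is the population of Finland?",
  rationale = "Based on national census data.",  # hypothetical source note
  result    = 5375000                            # current best answer
)

# Anyone may refine any attribute; the object is the group's shared answer.
population_of_finland$result <- 5401000          # an update after new data
str(population_of_finland)
</pre>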


The structure of information objects is '''like a [[:en:fractal|fractal]]''': an object with a research question may contain sub-questions that could be treated as separate objects themselves, and a discussion about a topic could be divided into several smaller discussions about sub-topics. For example, there is a [[variable]] called [[Population of Europe]] with its [[result]] [[index]]ed by country. Alternatively, this information could have been divided into several smaller population variables, one for each country; indeed, there is also a variable called [[Population of Finland]]. How information is divided or aggregated into variables is a matter of taste and practicality, and there are no objective rules for it. The rules only state that if two variables overlap, the information in them must be coherent. In theory, there is no limit to how detailed the scope of an [[information object]] can be.
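
The coherence rule for overlapping variables can be expressed as a simple check. The sketch below uses made-up population figures; it only illustrates how an indexed result and a narrower overlapping variable are kept consistent.

<pre>
# Sketch: one variable indexed by country vs. a narrower overlapping
# variable. Numbers are illustrative, not real statistics.
population_of_europe <- data.frame(
  Country = c("Finland", "Sweden", "Norway"),
  Result  = c(5.4e6, 9.4e6, 4.9e6)
)
population_of_finland <- 5.4e6

# Coherence rule: overlapping variables must agree on their shared scope.
europe_view <- population_of_europe$Result[
  population_of_europe$Country == "Finland"]
stopifnot(isTRUE(all.equal(europe_view, population_of_finland)))
</pre>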


'''[[Trialogue]]''' is the term used for such Wikipedia-like contributing. The [[trialogue]] concept emphasises that, in addition to dialogue or discussion, a major part of the communication and learning between the individuals in a [[group]] happens via [[information object]]s, in this case [[Opasnet]] pages. In other words, people not only talk or read about a topic but actually contribute to an information object that represents the shared understanding of the group. [[Wikipedia]] is a famous example of the [[trialogue|trialogical approach]], although Wikipedians do not use this word.
[[image:Variable example.png|thumb|An example of a [[variable]].]]


'''[[Group]]s''' are crucial in open assessment thinking because all research questions are (implicitly) transformed into questions of this format: "What can we as a [[group]] know about issue X?" The group considering a particular issue may be explicitly described, but it may also be implicit. In the latter case, it typically means ''anyone who wants to participate'' or, alternatively, ''all of humankind''.


The '''use purpose of information''' is crucial because it is the fuel of assessments. Nothing is done just for fun (although that is a valid motivation as well) but because the information is needed for some practical, explicit use. Of course, other kinds of assessments are also done to inform decisions, but open assessments are continuously evaluated against the use purpose; this is used to guide the assessment work, and the assessment is finished as soon as the use purpose is fulfilled.


Open assessment attempts to be a coherent methodology. Everything in the open assessment methodology, as well as in all open assessment processes, is accepted or rejected based on observations and reasoning. However, a few things cannot be verified using observations, and these are called the [[axioms of open assessment]]. The six axioms are the following: 1) The reality exists. 2) The reality is a continuum without, e.g., sudden appearances or disappearances of things without reason. 3) I can reason. 4) I can observe and use my observations and reasoning to learn about the reality. 5) Individuals (like me) can communicate and share information about the reality. 6) Not everyone is a systematic liar.


===Basic procedures===


'''[[Inference rules]]''' are used to decide what to believe. The rules are summarised here. 1) Anyone can promote a [[statement]] about anything (''promote'' = claim that the [[statement]] is true). 2) A promoted [[statement]] is considered valid unless it is invalidated (i.e., convincingly shown not to be true). 3) Uncertainty about whether a statement is true is measured with [[subjective probability|subjective probabilities]]. 4) The validity of a [[statement]] is always conditional on a particular [[group]] of people. 5) A [[group]] can develop rules other than these inference rules for deciding what to believe (mathematics and the laws of physics are examples of widely accepted rules, but disputed rules can be used as well). 6) If two people within a group promote conflicting statements, the ''a priori'' belief is that each statement is equally likely to be true. 7) ''A priori'' beliefs are updated into ''a posteriori'' beliefs based on observations and [[open criticism]] that is based on shared rules. In practice, this means the use of the [[scientific method]].
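
Rules 6 and 7 amount to ordinary Bayesian updating. As a minimal sketch with hypothetical likelihoods, two conflicting statements start from a 50:50 prior, and an observation then shifts the belief by Bayes' rule:

<pre>
# Sketch of inference rules 6 and 7: two members of a group promote
# conflicting statements A and not-A, so the prior is 50:50 (rule 6);
# an observation then updates the belief (rule 7). Likelihoods are
# hypothetical.
prior_A <- 0.5                       # rule 6: equally likely a priori

lik_obs_given_A    <- 0.8            # P(observation | A true)
lik_obs_given_notA <- 0.2            # P(observation | A false)

posterior_A <- prior_A * lik_obs_given_A /
  (prior_A * lik_obs_given_A + (1 - prior_A) * lik_obs_given_notA)
posterior_A                          # 0.8: belief shifts towards A
</pre>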


'''[[Benefit-risk assessment of food supplements#Result|Tiers of the open assessment process]]''' describe the typical phases of work when an open assessment is performed. The tiers are the following: Tier I: definition of the use purpose and scope of an assessment. Tier II: definition of the decision criteria. Tier III: information production. It is noteworthy that the three tiers closely resemble the first three phases of [[IEHIA]], but the fourth phase (appraisal) is not a separate tier in open assessment. Instead, appraisal and information use happen at all tiers as a continuous and iterative process. In addition, the tiers have some similarities to the [[BRAFO]] approach, although the tiers of the two approaches are not the same.


Open assessments contain two kinds of [[statement]]s: '''scientific (what is?) and moral (what should be?) statements'''. Moral [[statement]]s are developed using the '''[[morality game]]'''. The most important rules of the [[morality game]] are the following: 1) Observations are used as the starting point when evaluating the validity of scientific statements. 2) Opinions are used as the starting point when evaluating the validity of moral statements. 3) In addition to the starting points, the validity of statements is evaluated using [[open criticism]] and the [[scientific method]]. 4) Scientific statements must be coherent with each other everywhere. 5) Moral statements must be coherent with each other when applied within a particular [[group]]. 6) Moral statements may conflict between two [[group]]s unless the groups share members to whom the conflicting norms apply.
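
Rule 6 is essentially a set-membership condition. The sketch below, with hypothetical groups and members, shows the check in plain R:

<pre>
# Sketch of morality-game rule 6: conflicting moral statements between
# two groups are acceptable unless the groups share members to whom
# both norms apply. Groups and members are hypothetical.
group_a <- c("Alice", "Bob")
group_b <- c("Carol", "Dave")

norms_may_conflict <- function(g1, g2) length(intersect(g1, g2)) == 0

norms_may_conflict(group_a, group_b)           # TRUE: no shared members
norms_may_conflict(group_a, c("Bob", "Eve"))   # FALSE: Bob is in both
</pre>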


It is clear that within a self-organised group, not all people agree on all scientific or moral statements. The good news is that such agreement is neither expected nor hoped for. Instead, there are strong but simple rules for resolving disputes, namely the rules of structured '''[[discussion]]s'''. In straightforward cases, discussions can be informal, but in more complicated or heated situations, the discussion rules are followed. 1) Each discussion has one or more [[statement]]s as its starting point; the validity of these [[statement]]s is the topic of the discussion. 2) A [[statement]] is valid unless it is attacked with a valid [[argument]]. 3) Statements can be defended or attacked with [[argument]]s, which are themselves treated as statements of smaller discussions. Thus, a hierarchical structure of defending and attacking arguments is created. 4) When the discussion is resolved, the content of ''all'' valid statements is incorporated into the [[information object]]. All resolutions are temporary, and anyone can reopen a discussion. In fact, a resolution means nothing more than that the currently valid statements are included in the actual content of the respective [[information object]].
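
Rule 2 makes validity a recursive property of the argument tree. The following sketch evaluates a small, hypothetical discussion in plain R; defending arguments are omitted for brevity.

<pre>
# Sketch of the discussion rules: a statement is valid unless at least
# one *valid* attacking argument exists; arguments are themselves
# statements with their own sub-discussions. Structure is illustrative.
is_valid <- function(node) {
  attackers <- Filter(function(a) a$type == "attack", node$arguments)
  !any(vapply(attackers, is_valid, logical(1)))
}

discussion <- list(                       # "Option X is safe."
  arguments = list(
    list(type = "attack",                 # "Study S shows harm."
         arguments = list(
           list(type = "attack",          # "Study S was retracted."
                arguments = list())))))

is_valid(discussion)   # TRUE: the attack is itself invalidated
</pre>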


===Technical functionalities supporting open assessment===


'''[[Opasnet]]''' is the web-workspace for making open assessments. The user interface is a wiki, quite similar to [[Wikipedia]], although it also has enhanced functionalities for making assessments. One of the key ideas is that all work needed in an assessment can be performed using this single interface. Be it information collection, numerical modelling, [[discussion]]s, statistical analyses of original data, publishing original research results, [[peer review]], organising and distributing tasks within a group, or dissemination of results to decision-makers, it is all there, and it is all available for anyone to use and participate in. [[Opasnet]] is an overall name for many functionalities beyond the wiki, but because the wiki is the interface for users, [[Opasnet]] is often used as a synonym for the Opasnet wiki. The other major functionalities are presented next. The main article about this topic is [[Opasnet structure]].


Most variables have numerical values as their [[result]]s. These are often uncertain and are expressed as probability distributions. A web page is an impractical place to store and handle this kind of information, so a database called '''[[Opasnet Base]]''' is used for the purpose. It is a very flexible store: almost any result that can be expressed as a two-dimensional table can be stored in [[Opasnet Base]]. The results of a [[variable]] can be retrieved from the respective [[Opasnet]] page, and [[Opasnet]] can be used to upload new results into the database. Finally, if a [[variable]] B is causally dependent on a [[variable]] A, the result of A can be automatically retrieved from [[Opasnet Base]] and used in the [[formula]] for calculating B.
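
The sketch below illustrates the idea with a plain data frame standing in for the database; the variables, the numbers, and the dependency B = 3 × A are hypothetical.

<pre>
# Sketch of results stored as two-dimensional tables: variable A's
# result is a long-format table of Monte Carlo iterations, and
# variable B's formula retrieves it by merging on the iteration index.
# In Opasnet this retrieval goes through Opasnet Base; here a plain
# data frame stands in for the database.
set.seed(1)
A <- data.frame(Iter = 1:1000, Result = rnorm(1000, mean = 10, sd = 2))

# Formula of B: B = 3 * A, computed iteration by iteration.
B <- data.frame(Iter = A$Iter, Result = 3 * A$Result)

head(merge(A, B, by = "Iter", suffixes = c(".A", ".B")))
</pre>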


Because [[Opasnet Base]] contains samples of the distributions of variables, it is in effect one huge [[Bayesian belief network]], which can be used for assessment-level analyses, conditioning, and optimisation of different decision options. In addition to finding optimal decision options, [[Opasnet Base]] can be used to assess the value of further information for a particular decision. This statistical [[method]] is called ''[[Value of information]]''.
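
As a sketch of a value-of-information analysis on such samples, the expected value of perfect information (EVPI) can be computed directly from Monte Carlo draws; the options and numbers below are hypothetical.

<pre>
# Sketch of a value-of-information calculation on Monte Carlo samples:
# EVPI = E[max over options] - max over options of E[outcome].
# Options and numbers are hypothetical.
set.seed(1)
n <- 10000
benefit <- cbind(                      # sampled net benefit per option
  option_1 = rnorm(n, mean = 100, sd = 40),
  option_2 = rnorm(n, mean = 110, sd = 80)
)

value_with_perfect_info <- mean(apply(benefit, 1, max))  # choose per sample
value_without_info      <- max(colMeans(benefit))        # choose once
evpi <- value_with_perfect_info - value_without_info
evpi   # the most that further research could be worth for this decision
</pre>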


[[Opasnet]] also contains '''modelling functionalities''' for numerical models. This is an object-oriented functionality based on the [[R]] statistical software and the results in [[Opasnet Base]]. Each [[information object]] (typically a [[variable]]) contains a [[formula]] with detailed instructions on how its [[result]] should be computed, often based on the results of upstream [[variable]]s in a model.
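
A minimal sketch of this idea in plain R is given below: each variable carries a formula, and evaluating a variable first evaluates its upstream dependencies. The dependency chain and coefficients are hypothetical, and the code is not the actual Opasnet implementation.

<pre>
# Sketch: each variable carries a formula; evaluating a variable
# recursively evaluates its upstream dependencies first.
variables <- list(
  emissions     = list(deps = character(0),
                       formula = function(up) 100),
  concentration = list(deps = "emissions",
                       formula = function(up) up$emissions * 0.05),
  exposure      = list(deps = "concentration",
                       formula = function(up) up$concentration * 2)
)

evaluate <- function(name) {
  v  <- variables[[name]]
  up <- lapply(setNames(v$deps, v$deps), evaluate)  # upstream results
  v$formula(up)
}

evaluate("exposure")   # 100 * 0.05 * 2 = 10
</pre>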


===Meta level functionalities===


In addition to work and discussions about the actual topics related to real-world decision-making, there is also a meta level in [[Opasnet]]: discussions and work ''about'' the contents of [[Opasnet]]. The most visible meta-level feature is the '''[[rating bar]]''' in the top right corner of many [[Opasnet]] pages. '''[[Peer rating]]''' means that users are asked to evaluate the scientific quality and usefulness of a page on a scale from 0 to 100. This information can then be used by the assessors to evaluate which parts of an assessment require more work, or by readers who want to know whether the presented estimates are reliable enough for their own purposes.


Users can also write '''[[peer review]]s''' of pages. These are similar to peer reviews in scientific journals: written evaluations of the scientific quality of the content. Another form of written evaluation is '''[[acknowledgements]]''', a description of who has contributed what to the page and what fraction of the merit should be given to each contributor.


Estimates of scientific quality, peer reviews, and acknowledgements could be used to systematically calculate how much each contributor has done in [[Opasnet]]. These practices are, however, not yet well developed. [[Special:ContributionScores|Contribution scores]] are so far the only systematic method for even roughly estimating contributions quantitatively.


'''[[Respect theory]]''' is a [[method]] for estimating the value of freely usable [[information object]]s to a [[group]]. This method is under development, and hopefully it will provide practical guidance for distributing merit among contributors in [[Opasnet]].


===Why does open assessment work?===


Most people find it unbelievable that open assessment works (i.e., they do not believe that it could work). So far, there is only small-scale evidence showing things like: "There exist people who have been able to find a suitable topic and make an open assessment that converged to a conclusion without falling apart due to attacks (as there were no attacks to mention)." This is still very far, substance-wise, from conclusive evidence for my statement, although I hope that, time-wise, we are not very far from the future where such conclusive evidence exists. I will now present my statement and then attempt to defend it with the current understanding of making assessments, the nature of information, and the behaviour of human beings.


'''Open assessment, or approaches adopting similar principles, will take over a major part of information production motivated by societal needs and the improvement of societal decision-making.''' The main overall defence for this statement is economic: open assessment is cheaper to perform, easier to utilise, and of higher quality than the current alternative methods of producing societally important information. There are several reasons to believe this, at least for the most important issues:
 
* In all assessments, there is a lack of resources, and this limits the quality of the outcome. With important (and controversial) topics, opening up an assessment to anyone will bring new resources to the assessment in the form of interested volunteers.
** The rules of open assessment make it easy enough to organise the increased amount of new data (which may at points be of low quality) into high-quality syntheses within the limits of the new resources.
* Problems due to too narrow initial scoping, in particular, will become less common with more eyes looking at the topic.
* It becomes easier to systematically apply the basic principles of the [[scientific method]], namely rationale, observations, and especially open criticism.
* Any information organised for a previous assessment is readily available for a new assessment on a related topic. Both the work time for data collection and the calendar time from data collection to utilisation get shorter, which increases efficiency.
* All information is organised in a standard format, which makes it possible to develop powerful standardised methods for data mining, manipulation, and consistency checks.
* It is technically easy to prevent malevolent attacks against the content of an assessment without restricting the discussion about and improvement of the content in any way; the resolutions from the discussions are simply updated to the actual content by a trusted moderator.

==See also==
* [[Opasnet Base]]
* [[Discussion structure]], [[Discussion method]], [[Discussion]]
* [http://en.opasnet.org/en-opwiki/index.php?title=Open_assessment&oldid=17193 A previous version] containing topics ''Basics of Open assessment'' and ''Why is open assessment a revolutionary method?''.
* [[Falsification]]
==Keywords==



==References==

==Related files==

<mfanonymousfilelist></mfanonymousfilelist>