Peer review


This page is about peer review in open assessment. For other uses, see Peer review in Wikipedia.

<section begin=glossary />

Peer review in open assessment is a method for evaluating uncertainties that are not explicitly captured in the definition of an object (typically an assessment or a variable). Technically, it is a discussion on the Talk page of the object that contains the following statement:
"The definition of this object is based on the state-of-the-art scientific knowledge and methods. The data used is representative and unbiased. The causalities are described in a well-founded way. The formula correctly describes how the result can be calculated based on the data and causalities. Overall, the information in the definition reflects the current scientific understanding and is unlikely to change substantially because of existing information that has been omitted."

<section end=glossary />

Scope

What is a method for gaining acceptance for an object from the scientific community that fulfils the following criteria?

  • It is based on an evaluation of the object by peer researchers.
  • It is not in conflict with open assessment.
  • It has predictive value about whether major parts of the object's result are likely to be falsified (or shown to be falsely falsified) using information available at the time of the peer review.

Definition

Input

The input is the object to be evaluated.

Output

The output is a statement about the quality of the content of the data, causalities, and formula attributes.

Rationale

Result

Procedure

Peer review of the definition

Peer review in open assessment is a method for evaluating uncertainties that are not explicitly captured in the definition of an object (typically an assessment or a variable). Technically, it is a discussion on the Talk page of the object that contains the following statement:

"The definition of this object is based on the state-of-the-art scientific knowledge and methods. The data used is representative and unbiased. The causalities are described in a well-founded way. The formula correctly describes how the result can be calculated based on the data and causalities. Overall, the information in the definition reflects the current scientific understanding and is unlikely to change substantially because of existing information that has been omitted."

The following classification can be used for each attribute:

  • The attribute description is consistent with the state of the art.
  • The attribute description has minor deficiencies.
  • The attribute description is unreliable because of major deficiencies.
  • Cannot be evaluated.


Peer review of the result based on an external reference

Peer review can be performed on the result if the peer has an alternative way of deriving it. The alternatively derived result then serves as an external reference for evaluating the discrepancy between the variable's result and the reference. Of course, the validity of this review depends entirely on the validity of the external reference. Informativeness and calibration can be evaluated against the reference.
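If both the variable's result and the external reference are available numerically, such an evaluation can be scripted. The sketch below is a minimal illustration, not part of the Opasnet toolchain: the function names, the 90 % credible level, and the toy lognormal result are assumptions. Calibration is read here as whether the reference falls inside a central credible interval of the result, and informativeness as the width of that interval (narrower is more informative).

<pre>
# Minimal sketch: calibration and informativeness of a result distribution
# against an external reference value. Names and the 90 % level are
# illustrative assumptions, not Opasnet code.
import numpy as np

def calibration_check(result_samples, reference, level=0.90):
    """True if the external reference lies inside the central credible
    interval of the result distribution."""
    lo, hi = np.quantile(result_samples, [(1 - level) / 2, (1 + level) / 2])
    return lo <= reference <= hi

def interval_width(result_samples, level=0.90):
    """Width of the central credible interval; a narrower interval means
    a more informative result."""
    lo, hi = np.quantile(result_samples, [(1 - level) / 2, (1 + level) / 2])
    return hi - lo

rng = np.random.default_rng(1)
result = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # toy result samples
print(calibration_check(result, reference=1.2), interval_width(result))
</pre>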

In addition, a discrepancy test can be performed. The aim of this test is to evaluate whether the result of a value of information (VOI) analysis related to the variable is credible. A VOI analysis gives low values if there is little need to improve the model. The problem is that a sufficiently bad model may also give low VOI estimates, thus falsely implying a good model. Fortunately, there are a few indirect ways to evaluate this. One is a peer review of the definition. Another is a discrepancy test, which measures whether the external reference is essentially included in the current result of the variable. If it is, the VOI is unlikely to be underestimated. The Kolmogorov–Smirnov test is a relevant discrepancy test.
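As an illustration of the Kolmogorov–Smirnov test used this way, the sketch below compares Monte Carlo samples of the variable's result against samples derived from the external reference. The sample sizes and normal distributions are hypothetical; scipy.stats.ks_2samp is a standard two-sample Kolmogorov–Smirnov implementation.

<pre>
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
result_samples = rng.normal(loc=10.0, scale=2.0, size=5_000)     # variable's result
reference_samples = rng.normal(loc=10.5, scale=2.5, size=5_000)  # external reference

# Two-sample KS test: a small p-value signals a discrepancy, i.e. the
# external reference is probably not essentially included in the result.
res = stats.ks_2samp(result_samples, reference_samples)
print(f"KS statistic = {res.statistic:.3f}, p = {res.pvalue:.3g}")
</pre>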

In the numerical VOI analysis, the result distribution is divided into n equally probable bins. The discrepancy test asks what the probability is that the result falls in a higher (or lower) bin than the external reference. If both probabilities are fairly high, it is unlikely that the result is falsely too narrow and biased. What counts as "fairly high" remains to be determined.
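Assuming again that both the result and the external reference are available as Monte Carlo samples, the bin-based test can be sketched as follows. The choice of n = 10 bins, the pairing of independent draws, and all variable names are illustrative assumptions.

<pre>
import numpy as np

def discrepancy_probabilities(result_samples, reference_samples,
                              n_bins=10, n_pairs=100_000):
    """Estimate P(result in a higher bin) and P(result in a lower bin)
    than the external reference, with bins that are equally probable
    under the result distribution."""
    rng = np.random.default_rng(3)
    # Interior bin edges: quantiles of the result distribution, so that
    # each of the n_bins bins holds an equal share of the result mass.
    edges = np.quantile(result_samples, np.linspace(0, 1, n_bins + 1))[1:-1]
    r_bins = np.digitize(rng.choice(result_samples, size=n_pairs), edges)
    e_bins = np.digitize(rng.choice(reference_samples, size=n_pairs), edges)
    return np.mean(r_bins > e_bins), np.mean(r_bins < e_bins)

rng = np.random.default_rng(4)
result = rng.normal(10.0, 2.0, size=5_000)
reference = rng.normal(10.5, 2.5, size=5_000)
p_higher, p_lower = discrepancy_probabilities(result, reference)
print(f"P(higher) = {p_higher:.2f}, P(lower) = {p_lower:.2f}")
</pre>

If both probabilities are well above zero, the result distribution is wide enough to essentially contain the reference; if one of them is near zero, the result may be biased or falsely narrow.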

Management

The peer review discussion has the following form:

Peer review

How to read discussions

Fact discussion:
Opening statement:

Closing statement: Resolution not yet found.

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: The data used is representative and unbiased. --Jouni 11:37, 16 January 2009 (EET) (type: truth; paradigms: science: defence)

←--2: The causalities are described in a well-founded way. --Jouni 23:04, 19 January 2009 (EET) (type: truth; paradigms: science: defence)

⇤--6: Attack these arguments if necessary. --Jouni 23:04, 19 January 2009 (EET) (type: truth; paradigms: science: attack)

←--3: The formula correctly describes how the result can be calculated based on the data and causalities. --Jouni 23:04, 19 January 2009 (EET) (type: truth; paradigms: science: defence)

⇤--5: The issue described in argument 4 is missing. --Jouni 11:37, 16 January 2009 (EET) (type: truth; paradigms: science: attack)

←--4: The issue of ...(describe the issue here)... is important and relevant for this object. --Jouni 11:37, 16 January 2009 (EET) (type: truth; paradigms: science: defence)

See also

References