Improving the result of a variable

Improving the result of a variable describes the process by which the variable result (i.e., the estimate of the real-world phenomenon) should be improved as the assessment work progresses.

Scope

How should the variable result be improved?

Rationale

Not much can be said about the result of a variable before the scope of the variable is clearly described. Often, however, the scope is still very unclear while something is already being said about the result. In that case, the result should be used as a tool for reaching a better understanding of the scope. This seems to be human nature: people like to talk about real-world phenomena without thinking clearly about what the phenomenon actually is, and they talk about the impacts of one thing on another without clearly defining either the things or the impacts.

Therefore, the assessor should first try to understand what the variables under discussion actually are.

Then, the assessor should understand their position in the causal net of the assessment. The logic of the net may affect how the variable scopes should be defined.

The aim is to get the model structure into a reasonable state quickly, because there is no need to spend much time collecting data for the variables in the beginning, and to be able to run a value-of-information (VOI) analysis soon in order to see whether a variable needs to be improved. In contrast, if you start with the usual alternative approach, namely point estimates, you have no idea about the uncertainties of the variables and you cannot perform a VOI analysis. Sensitivity analyses are traditionally recommended; the crudest probability estimates are in effect only sensitivity analyses, but even then they carry a probabilistic interpretation (i.e., the lowest and highest plausible values).
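For illustration, here is a minimal Python sketch of such a crude VOI check. The decision problem, variable names and numbers are hypothetical and only meant to show that a two-point "lowest and highest plausible value" estimate already supports an expected value of perfect information (EVPI) calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crude two-point estimate for an upstream variable: only the lowest and
# highest plausible values are given, each with probability 0.5
# (hypothetical exposure variable, arbitrary units).
exposure = rng.choice([0.1, 2.0], size=10_000, p=[0.5, 0.5])

# Hypothetical decision: net benefit of two actions as a function of exposure.
def net_benefit(action, x):
    if action == "do_nothing":
        return -10.0 * x          # damage grows with exposure
    return -5.0 - 2.0 * x         # mitigation has a fixed cost but less damage

actions = ["do_nothing", "mitigate"]

# Expected value with current information: commit to the single best action.
ev_prior = max(net_benefit(a, exposure).mean() for a in actions)

# Expected value with perfect information: pick the best action per sample.
ev_perfect = np.maximum.reduce([net_benefit(a, exposure) for a in actions]).mean()

evpi = ev_perfect - ev_prior
print(f"EVPI for the exposure variable: {evpi:.2f}")
```

A positive EVPI indicates that better information about the variable could change the decision, i.e., the variable is worth improving; an EVPI near zero suggests the crude estimate is already good enough for this assessment.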

The model structure is more important in the beginning than the variable estimates. Without the structure, it is difficult to evaluate the logic of the model.

Procedure

  1. Understand what the variable is about.
  2. Check that the following properties are coherent within the assessment:
    1. Position and connections of the variables in the causal net (causality test).
    2. Clarity of the variables (do they pass the clairvoyant test?).
    3. Units of the variables (unit test).
  3. Define the domains of the variables.
  4. If a variable is an upstream variable (no causal connections pointing to it) and not much is known about it, define the result as the two-point probability distribution P(min(domain)) = p and P(max(domain)) = 1 - p, with p = 0.5 (see the sketch after this list). This maximises the uncertainty the variable may have. In addition, it is coherent with the decision-analytic principle that if you face two uncertain outcomes and have no idea about the probability of either, you should assume a probability of 0.5 for each (REF).
  5. When the whole assessment model is described in a crude way, run a VOI analysis.
  6. Spend your time on those variables that show up at the top of the VOI analysis.
  7. Repeat the previous two points until one of the following happens:
    1. You find an answer to the question of the assessment.
    2. You run out of data that could improve your model.
    3. You run out of resources (time or money).
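As a sketch of step 4, the following Python snippet builds the maximum-uncertainty two-point distribution over a variable's domain; the domain bounds, sample size and helper name are hypothetical, and the samples could then feed a VOI loop like the one sketched under Rationale (steps 5-7).

```python
import numpy as np

def max_uncertainty_samples(domain_min, domain_max, n=10_000, p=0.5, seed=None):
    """Crude prior for an upstream variable with an unknown result:
    P(min of domain) = p, P(max of domain) = 1 - p (step 4 above)."""
    rng = np.random.default_rng(seed)
    return rng.choice([domain_min, domain_max], size=n, p=[p, 1.0 - p])

# Hypothetical upstream variable with domain [0, 100] and no data yet.
samples = max_uncertainty_samples(0.0, 100.0, seed=1)
print(samples.mean(), samples.std())   # roughly 50 and 50: the spread is maximal
```

Once data become available for a variable flagged by the VOI analysis, its two-point distribution is simply replaced with a better-informed one and the analysis is rerun.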