Obstacles and drawbacks hampering the use of Open Risk Assessment

From Opasnet

This page is a collection of reasons (methodological, technical, practical, psychological, and so on) that are known or believed, from experience, to hamper the use of open assessment (OA).

GOAL: By articulating the problems and identifying the most urgent ones, this collection should aid the further development of OA and its wider acceptance and use.

Suggestions for solutions will also be presented on this page.

Classification of obstacles

The various problems can be characterized using at least three important dimensions. Analyzing each problem in terms of these dimensions should make it easier to find suitable remedies.

Main axis: Degree of experience

Assumed difficulties/drawbacks (without first-hand experience)

Remedies:

  • identify false prejudices and misconceptions about OA → correct them
  • identify lack of information about OA → provide more information
  • provide possibilities for easily testing OA in practice
  • demonstrate the usefulness of OA with a real case

Known difficulties/drawbacks (shown by experience)

Remedies:

  • identify the difficulties/drawbacks → try to develop OA further

Axis: Underlying principles vs technical implementation

Disagreement on the underlying principles

For example, the benefits of open participation, or the theoretical possibilities for dispute resolution.

Remedies:

  • identify, discuss, and analyze the issues → justify OA on a scientifically sound basis

Practical difficulties/drawbacks associated with the implementation or application of the OA method

Remedies:

  • identify the difficulties/drawbacks → try to develop OA further

Axis: Cognitive vs emotional

Cognitive component dominating

  1. disagreements of an essentially theoretical nature
  2. more technical problems

Remedies:

  1. identify, analyze, and discuss the issue → try to convince the opponents
  2. identify → solve

Emotional component dominating

Fears of potential losses

  • plagiarism
  • loss of scientific advantage (due to revealing one's information/knowledge; due to the time invested into OA)
  • loss of authority/credibility (presenting preliminary/erroneous information; learning new computer tools)
  • decrease of control (due to open participation)
  • loss of time/money/effort (due to the time and effort an OA requires)

Remedies:

  • identify the fears
  • refute ungrounded fears
  • for justified fears, assess the realistic magnitude of the consequences
  • emphasize the advantages of OA (for balance)

Inertia

  • i.e. the reluctance to change the prevailing practices (due to the actual effort needed)

Remedies:

  • as an incentive, demonstrate the gains of OA with a real case

Experiences from the Helsinki CCZ case

Helsinki CCZ is a case study about congestion charge zones in Helsinki. It was performed in the Intarese project. We have also used this experience in an end-user evaluation in the Beneris project, because many of the issues dealt with here are not specific to the actual topic.

Practical inconvenience and additional work due to breaking apart an assessment into separate wikipages

At the starting point, the Helsinki workplan was a single contiguous (bulleted) text, making it easy to scroll and edit, to print/export the text for reports, to get a quick overview of the extent and stage of work, and to see the entire hierarchy of titles and subtitles (in the automatic contents).

Breaking the text apart into separate variables (wikipages) complicates or even prevents the above tasks, especially by:

  1. creating a need to jump back and forth between many wikipages
  2. requiring additional work for printing/exporting the separate pages for reports
  3. making it more laborious to estimate the stage of completion

Alleviation to #1: The "Use breadcrumbs" option in the wiki-user preferences makes it easy to return to previous pages.
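Alleviation to #2 could, in principle, be a script that stitches the separate variable pages back into one contiguous document for printing or export. The sketch below is only an illustration of the idea: it assumes the page texts have already been fetched (e.g. via the MediaWiki API), and the page names and heading levels are invented.

```python
# Hypothetical sketch: stitch separate variable pages back into one
# contiguous wikitext document for printing/export. Assumes page texts
# were already fetched (e.g. via the MediaWiki API); names are invented.
def stitch_assessment(title, pages):
    """pages: list of (page_name, wikitext) in the desired reading order."""
    parts = [f"= {title} ="]
    for name, text in pages:
        parts.append(f"== {name} ==")
        parts.append(text.strip())
    return "\n\n".join(parts)

pages = [
    ("Traffic volume", "Baseline traffic counts for the charge zone..."),
    ("PM2.5 exposure", "Exposure is estimated with a dispersion model..."),
]
document = stitch_assessment("Helsinki CCZ assessment", pages)
print(document.splitlines()[0])  # "= Helsinki CCZ assessment ="
```

The merged wikitext can then be printed or converted to a report format in one pass, instead of exporting each page separately.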

Concerns about publication rights and authorship

Insofar as scientific publications are planned to be written about (or based on) this impact assessment (IA), open participation creates some concerns about publication rights:

  • who will be the authors of the publication(s) if a large number of persons contribute to the IA, but with greatly differing levels of contribution?
  • can one be confident that parts of the unpublished work (e.g. methods, structure, ideas) will not be plagiarized because of the open participation? (even when participation is restricted, the information might spread further)

Reluctance to reveal erroneous/preliminary work to other researchers

Due to the extent of the Helsinki IA, the novelty of most of the contents for the main author, and the limitations of time, some of the (early) content is bound to be erroneous or poorly thought out, making it somewhat unpleasant to reveal to fellow researchers.

Partial solution: All participants need to adopt a constructive attitude towards making IAs. New IAs are never complete from the start. When inviting new participants to join an IA, one must inform them about the gradually evolving nature of the work, as well as about the resource constraints. In the early stages of an assessment, the presence (and even predominance) of gaps, inaccuracies, and errors should not be viewed as demerits, but rather as the natural starting point for joint elaboration. After all, this is a key argument for open participation. Furthermore, due to the interlinked requirements for data and sub-models, and the many practical uncertainties (such as the availability of data of varying quality and from various sources), IAs often exhibit an iterative character, which may require several rounds of revision and re-definition for some parts of the assessment. Thus, preliminary and temporary choices will likely be necessary, especially in the early stages of the assessment.

How would the formal argumentation method work in practice?

Most risk assessments of practical value are subject to a huge number of potential matters of dispute. This raises worries about many practical problems:

  • simply due to the lack of time, many disputes may remain totally unaddressed (apart from by the participant who first raised them)
  • the number of disputes may become so high that no single researcher can address all of them
    • Practical suggestion: is there a tool reporting all open disputes of a given IA? --Erkki Kuusisto 16:46, 18 January 2008 (EET) (type: truth; paradigms: science: comment)
  • even when a dispute has been addressed by several participants, it may fail to be properly settled because of:
    • a persistent disagreement between two or more opinions (possibly manifest as an "edit war") → how should one deal with situations like this?
    • lacking resources, resulting in disputes being dropped/forgotten/left open
  • some disputes may seem to be settled (due to an "authority opinion") but actually remain unsettled. This may happen when a dispute involves known authorities/specialists in the field, possibly deterring some participants from expressing their discrepant opinion, however well thought out it may be.
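The tool suggested in the comment above could, as a minimal sketch, group unresolved discussion items by the variable page they belong to. The data model below (a `Dispute` record with an explicit `resolved` flag) is invented for illustration; Opasnet itself stores discussions as wiki markup, so a real tool would first have to parse that markup.

```python
# Hypothetical sketch: report all open disputes of an assessment,
# assuming each discussion item carries an explicit status flag.
# The Dispute record and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Dispute:
    page: str        # wiki page (variable) the dispute belongs to
    statement: str   # the disputed statement
    resolved: bool   # has the dispute been settled?

def open_disputes(disputes):
    """Return the unresolved disputes, grouped by page."""
    report = {}
    for d in disputes:
        if not d.resolved:
            report.setdefault(d.page, []).append(d.statement)
    return report

disputes = [
    Dispute("Traffic volume", "Baseline year should be 2005", resolved=True),
    Dispute("Traffic volume", "Charge zone boundary is too small", resolved=False),
    Dispute("PM2.5 exposure", "Model underestimates street canyons", resolved=False),
]

for page, statements in open_disputes(disputes).items():
    print(f"{page}: {len(statements)} open dispute(s)")
```

Such a per-page summary would at least make dropped or forgotten disputes visible, even if it cannot resolve them.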

A major weakness of the formal argumentation method may be that it relies on rational thinking and on written explication of the problem.

  • While rational thinking and explication are useful methods for many purposes, they are also tremendously slow compared to "intuition-supported decision-making". The latter is the method used in the overwhelming majority of practical decision situations - even most of those made by experts. "Intuitive thinking" can rapidly incorporate vast amounts of multidimensional information, without an explicit analysis of the internal workings of this information.
  • In contrast, formal argumentation necessitates writing out the justification of one's opinion, which may amount to large amounts of text - and take a lot of time. → The worry is that people may therefore prefer not to get involved in disputes, even if they hold the best opinion (and would know how to justify it).

How do computer sub-models fit into the OA variable structure?

Before starting the conversion to the OA structure, my impression is that the variable structure, especially the "Definition" attribute, is mostly oriented towards relatively simple mathematical functions.

It is not immediately clear how useful this structure is for large, dedicated (often commercial) computer models, such as traffic and dispersion models. However, such models constitute the major part of the Helsinki-case model chain.

Conclusion: Need to test this.