From open assessment to shared understanding: practical experiences

{{nugget|moderator=Jouni}}


'''From insight network to open policy practice: practical experiences''' is a manuscript of a scientific article. The main point is to offer a comprehensive summary of the methods developed at THL/environmental health to support informed societal decision making, and evaluate their use and usability in practical examples in 2006-2018. The manuscript was published in 2020:
 
Tuomisto, J.T., Pohjola, M.V. & Rintala, T. From insight network to open policy practice: practical experiences. Health Res Policy Sys 18, 36 (2020). https://doi.org/10.1186/s12961-020-00547-3


'''Title page'''


'''From insight network to open policy practice: practical experiences'''

Short title: From insight network to open policy practice

Jouni T. Tuomisto<sup>1*</sup> ORCID 0000-0002-9988-1762, Mikko Pohjola<sup>1,2</sup> ORCID 0000-0001-9006-6510, Teemu Rintala<sup>1,3</sup> ORCID 0000-0003-1849-235X.

<sup>1</sup> Finnish Institute for Health and Welfare, Kuopio, Finland

<sup>2</sup> Kisakallio Sport Institute, Lohja, Finland

<sup>3</sup> Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland

<sup>*</sup> Corresponding author

Email: jouni.tuomisto[]thl.fi


This article describes a decision support method called open policy practice. It has mostly been developed at the Finnish Institute for Health and Welfare (THL, Finland) during the last 15 years. Each assessment, case study, and method has been openly described and also typically published in scientific journals. However, this is the first comprehensive summary of open policy practice as a whole (since 2007), and it thus gives a valuable overview, rationale, and evaluation of several methodological choices we have made. We have combined methods from several disciplines, including toxicology, exposure sciences, impact assessment, statistical and Bayesian methods, argumentation theory, ontologies, and co-creation, to produce a coherent method for scientific decision support.

The main topics of the article are described on the Opasnet pages [[Open policy practice]], [[Shared understanding]], [[Open assessment]], and [[Properties of good assessment]].


== Abstract ==
'''Background'''


Evidence-informed decision making and better use of scientific information in societal decisions has been an area of development for decades but is still topical. Decision support work can be viewed from the perspective of information collection, synthesis, and flow between decision makers, experts, and stakeholders. Open policy practice is a coherent set of methods for such work. It has been developed and utilised mostly in Finnish and European contexts.


'''Methods'''


An overview of open policy practice is given, and its theoretical and practical properties are evaluated against the properties of good policy support. The evaluation is based on information from several assessments and research projects developing and applying open policy practice, and on the authors' practical experiences. The methods are evaluated on their capability to produce good quality of content, applicability, and efficiency in policy support, as well as on how well they support close interaction among participants and understanding of each other's views.


'''Results'''


The evaluation revealed that the methods and online tools work as expected, as demonstrated by the assessments and policy support processes conducted. The approach improves the availability of information and especially of relevant details. Experts are ambivalent about the acceptability of openness: it is an important scientific principle, but it goes against many current research and decision making practices. However, co-creation and openness are megatrends that are changing science, decision making, and society at large. Contrary to many experts' fears, open participation has not caused problems in performing high-quality assessments. On the contrary, a key challenge is to motivate and help more experts, decision makers, and citizens to participate and share their views. Many methods within open policy practice have also been used widely in other contexts.


'''Conclusions'''


Open policy practice proved to be a useful and coherent set of methods. It guided policy processes toward a more collaborative approach, whose purpose was wider understanding rather than winning a debate. There is potential for merging open policy practice with other open science and open decision process tools. Active facilitation, community building, and improving the user-friendliness of the tools were identified as key solutions for improving the usability of the method in the future.


;Keywords: environmental health, decision support, open assessment, open policy practice, shared understanding, policy making, collaboration, evaluation, knowledge crystal, impact assessment


== Background ==


This article describes and evaluates ''open policy practice'', a set of methods and tools for improving evidence-informed policy making. Evidence-informed decision support has been a hot and evolving topic for a long time, and its importance is not diminishing any time soon. In this article, decision support is defined as knowledge work that is performed during the whole decision process (ideating possible actions, assessing impacts, deciding between options, implementing decisions, and evaluating outcomes) and that aims to produce better decisions and outcomes<ref name="pohjola2013">Pohjola M. Assessments are to change the world. Prerequisites for effective environmental health assessment. Helsinki: National Institute for Health and Welfare Research 105; 2013. http://urn.fi/URN:ISBN:978-952-245-883-4. Accessed 1 Feb 2020.</ref>. Here, "assessment of impacts" means ex ante consideration about what will happen if a particular decision is made, and "evaluation of outcomes" means ex post consideration about what did happen after a decision was implemented.


The area is complex, and the key players (decision makers, experts, and citizens or other stakeholders) all have different views on the process, their own roles in it, and how information should be used in the process. For example, researchers often think of information as a way to find the truth, while politicians see information as one of the tools to promote political agendas ultimately based on values.<ref name="jussila2012">Jussila H. Päätöksenteon tukena vai hyllyssä pölyttymässä? Sosiaalipoliittisen tutkimustiedon käyttö eduskuntatyössä. [Supporting decision making or sitting on a shelf? The use of sociopolitical research information in the Finnish Parliament.] Helsinki: Sosiaali- ja terveysturvan tutkimuksia 121; 2012. http://hdl.handle.net/10138/35919. Accessed 1 Feb 2020. (in Finnish)</ref> Therefore, a successful method should provide functionalities for each of the key groups.


In the late 1970's, the focus, especially in the US, was on scientific knowledge and the idea that political ambitions should be separated from objective assessments. Since the 1980's, risk assessment has been a key method to assess human risks of environmental and occupational chemicals<ref>National Research Council. Risk Assessment in the Federal Government: Managing the Process. Washington DC: National Academy Press; 1983.</ref>. The National Research Council specifically developed a process that could be used by all federal US agencies. The report emphasised the importance of scientific knowledge in decision making and scientific methods, such as critical use of data, as integral parts of assessments. Criticism based on observations and rationality is a central idea in the scientific method<ref name="popper1963">Popper K. Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, ISBN 0-415-04318-2</ref>. The report also clarified the use of causality: the purpose of an assessment is to clarify and quantify a causal path where an exposure to a chemical or other agent leads to a health risk via pathological changes described by the dose-response function of that chemical.


The approach was designed for single chemicals rather than for complex societal issues. This shortcoming was approached in another report that acknowledged this complexity and offered deliberation with stakeholders as a solution, in addition to scientific analysis<ref name="nrc1996">National Research Council. Understanding risk. Informing decisions in a democratic society. Washington DC: National Academy Press; 1996.</ref>. One idea was to explicate the intentions of the decision maker but also those of the public. Mutual learning about the topic was also seen as important. There are models for describing facts and values in a coherent dual system<ref>von Winterfeldt D. Bridging the gap between science and decision making. PNAS 2013;110:3:14055-14061. http://www.pnas.org/content/110/Supplement_3/14055.full</ref>. However, practical assessments have found it difficult to successfully perform deliberation on a routine basis<ref name="pohjola2012">Pohjola MV, Leino O, Kollanus V, Tuomisto JT, Gunnlaugsdóttir H, Holm F, Kalogeras N, Luteijn JM, Magnússon SH, Odekerken G, Tijhuis MJ, Ueland O, White BC, Verhagen H. State of the art in benefit-risk analysis: Environmental health. Food Chem Toxicol. 2012;50:40-55.</ref>. Indeed, citizens often complain that even if they have been formally listened to during a process, their concerns have not contributed to the decisions made, and that the processes need more openness<ref>Doelle M, Sinclair JA. (2006) Time for a new approach to public participation in EA: Promoting cooperation and consensus for sustainability. Environmental Impact Assessment Review 26: 2: 185-205 https://doi.org/10.1016/j.eiar.2005.07.013.</ref>.


Western societies have shown a megatrend of increasing openness in many sectors, including decision-making and research. Openness of scientific publishing is increasing, many research funders also demand publishing of data, and research societies are starting to see the publishing of data as a scientific merit in itself<ref name="tsv2020"/>. It has been widely acknowledged that the current mainstream of proprietary (in contrast to open access) scientific publishing is a hindrance to spreading ideas and ultimately science<ref>Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biol 2006: 4; e157. doi: 10.1371/journal.pbio.0040157</ref>. Governments have also been active in opening data and statistics for wide use (e.g. data.gov.uk). Governance practices have been developed towards openness and inclusiveness, promoted by international initiatives such as the Open Government Partnership (www.opengovpartnership.org).


As an extreme example, the successful hedge fund Bridgewater Associates implements radical openness and continuous criticism of all ideas presented by its workers, rather than letting organisational status determine who is heard<ref name="dalio2017">Dalio R. Principles: Life and work. New York: Simon & Schuster; 2017. ISBN 9781501124020</ref>. In a sense, they are implementing the scientific method in a much more rigorous way than is typically done in science.


In the early 2000's, several important books and articles were published about mass collaboration<ref>Tapscott D, Williams AD. Wikinomics. How mass collaboration changes everything. USA: Portfolio; 2006. ISBN 1591841380</ref>, wisdom of crowds<ref>Surowiecki J. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. USA: Doubleday; Anchor; 2004. ISBN 9780385503860</ref>, crowdsourcing in the government<ref name="noveck2010">Noveck, BS. Wiki Government - How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful. Brookings Institution Press; 2010. ISBN 9780815702757</ref>, and co-creation<ref name="mauser2013">Mauser W, Klepper G, Rice M, Schmalzbauer BS, Hackmann H, Leemans R, Current HM. Transdisciplinary global change research: the co-creation of knowledge for sustainability. Opinion in Environmental Sustainability 2013;5:420–431; doi:10.1016/j.cosust.2013.07.001</ref>. A common idea of the authors was that voluntary, self-organised groups had knowledge and capabilities that could be much more effectively harnessed in the society than what was happening at the time. Large collaborative projects have shown that in many cases, they are very effective ways to produce high-quality information, as long as quality control systems are functional. In software development, Linux operating system, Git software, and Github platform are examples of this. Also Wikipedia, the largest and most used encyclopedia in the world, has demonstrated that self-organised groups can indeed produce high-quality content<ref>Giles J. Internet encyclopaedias go head to head. Nature 2005;438:900–901 doi:10.1038/438900a</ref>.


The five principles of collaboration, openness, causality, criticism, and intentionality (Table 1) were seen as potentially important for environmental health assessment at the Finnish Institute for Health and Welfare (THL; at that time the National Public Health Institute, KTL), and they were adopted in the methodological decision support work of the Centre of Excellence for Environmental Health Risk Analysis (2002-2007). Open policy practice has been developed during the last twenty years especially to improve environmental health assessments<sup>a</sup>. Developers have come from several countries in projects mostly funded by the EU and the Academy of Finland (see Funding and Acknowledgements).


Materials for the development, testing, and evaluation of open policy practice were collected from several sources.


Research projects about assessing environmental health risks were an important platform to develop, test, and implement assessment methods and policy practices. Important projects are listed in Funding. Especially the Sixth Framework Programme of the EU and its INTARESE and HEIMTSA projects (2005-2011) enabled active international collaboration around environmental health assessment methods.


Assessment cases were performed in research projects and in support of national or municipal decision making in Finland. Methods and tools were developed side by side with practical assessment work (Appendix S1).


Literature searches were performed on scientific and policy literature and websites. Concepts and methods similar to those in open policy practice were sought. Data were searched from PubMed, Web of Knowledge, Google Scholar, and the Internet. In addition, a snowball method was used: the documents found were used to screen their references and their authors' other publications to identify new publications. Articles that describe large literature searches and their results include<ref name="pohjola2013"/><ref name="pohjola2012"/><ref name="pohjola2013b"/><ref name="pohjola2011"/>.
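As an illustration of the snowball step, the sketch below expands a set of seed documents by screening their references and the authors' other publications for a fixed number of rounds. The lookup functions and the toy bibliography are hypothetical; only the iteration logic reflects the method described above.

<syntaxhighlight lang="python">
# Illustrative sketch of the snowball search step described above.
# The lookup functions and the toy bibliography are hypothetical;
# only the iteration logic reflects the described method.

def snowball(seed_documents, get_references, get_author_publications, rounds=2):
    """Expand a set of seed documents by screening their references
    and their authors' other publications for a fixed number of rounds."""
    found = set(seed_documents)
    frontier = set(seed_documents)
    for _ in range(rounds):
        new_docs = set()
        for doc in frontier:
            new_docs.update(get_references(doc))           # documents cited by doc
            new_docs.update(get_author_publications(doc))  # other work by its authors
        frontier = new_docs - found   # screen only documents not seen before
        found |= frontier
    return found

# Toy usage with small lookup tables standing in for real databases:
refs = {"A": ["B"], "B": ["C"], "C": []}
by_author = {"A": ["D"], "B": [], "C": [], "D": []}
print(snowball({"A"}, lambda d: refs.get(d, []), lambda d: by_author.get(d, [])))
</syntaxhighlight>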


Open risk assessment workshops were organised as spin-offs of several of these projects for international doctoral students in 2007, 2008, and 2009. The workshops offered a place to share, discuss, and criticise ideas.


A master's course ''Decision Analysis and Risk Management'' (6 credit points) was organised by the University of Eastern Finland (previously University of Kuopio) in 2011, 2013, 2015, and 2017. The course taught open policy practice and tested its methods in course work.


Finally, general expertise and understanding were developed through practical experience and long-term follow-up of international and national politics.


The development and selection of methods and tools for open policy practice has roughly followed this iterative pattern, where an idea is improved during each iteration or sometimes rejected:
* A need is identified for improving knowledge practices of a decision process or scientific policy support. This need typically arises from scientific literature, project work or news media.
* A solution idea is developed to tackle the need.
* It is checked whether the idea fits logically into the current framework of open policy practice.
* The idea is discussed in a project team to develop it further and gain acceptance.
* A practical solution (web tool, checklist or similar) is produced.
* The solution is piloted in an assessment or policy process.
* The solution is added into the recommended set of methods of open policy practice.
* The method is updated based on practical experience.


Development of open policy practice started with a focus on opening up the expert work in policy assessments. In 2007, this line of research produced a summary report about the new methods and tools developed to facilitate assessments<ref name="ora2007">Tuomisto JT, Pohjola M, editors. Open Risk Assessment. A new way of providing scientific information for decision-making. Helsinki: Publications of the National Public Health Institute B18; 2007. http://urn.fi/URN:ISBN:978-951-740-736-6.</ref>. Later, a wider question about ''open policy practice''<sup>b</sup> emerged: how to organise evidence-informed decision making in a situation where the five principles are used as the starting point? The question was challenging, especially as it was understood that societal decision making is rarely a single event, but often consists of several interlinked decisions at different time points and sometimes by several decision-making bodies. Therefore, it was seen more as leadership guidance than as advice about a single decision.


This article gives the first comprehensive, peer-reviewed description of the current methods and tools of open policy practice since the 2007 report<ref name="ora2007"/>. Case studies have been published along the way, and the key methods have been described in different articles. Also, all methods and tools have been developed online, and the full material has been available at Opasnet (http://en.opasnet.org) for interested readers since each piece was first written.


The purpose of this article is to critically evaluate the performance of open policy practice. Does open policy practice have the properties of good policy support? And does it enable policy support according to the five principles in Table 1?


{| {{prettytable}}
|+ '''Table 1. Principles of open policy practice (the COCCI principles).'''
! Principle !! Description
|----
| Collaboration || Knowledge work is performed together with the aim of producing shared information.
|----
| Openness || All work and all information are openly available to anyone interested for reading and contributing all the time. If there are exceptions, these must be publicly justified.
|----
| Causality || The focus is on understanding and describing the causal relations between the decision options and the intended outcomes. The aim is to predict what impacts will likely occur if a particular decision option is chosen.
|----
| Criticism || All information presented can be criticised based on relevance and accordance with observations. The aim is to reject ideas, hypotheses, and ultimately decision options that do not hold against critique.
|----
| Intentionality || The decision makers explicate their objectives and the decision options under consideration. The values of other participants or stakeholders are also documented and considered.
|}


==Open policy practice==


[[image:Information flow within open policy practice.svg|thumb|400px|Figure 1. Information flows in open policy practice. Open assessments and web-workspaces have an important role as information hubs. They collect relevant information for particular decision processes and organise and synthesise it into useful formats especially for decision makers but also for anyone. The information hub works more effectively if all stakeholders contribute to one place, or alternatively facilitators collect their contributions there.]]
In this section, open policy practice is described in its current state. First, an overview is given, and then each part is described in more detail.


''Open policy practice'' is a set of methods to support and perform societal decision making in an open society, and it is the overarching concept covering all methods, tools, practices, and terms presented in this article<ref>Tuomisto JT, Pohjola M, Pohjola P. Avoin päätöksentekokäytäntö voisi parantaa tiedon hyödyntämistä. [Open policy practice could improve knowledge use.] Yhteiskuntapolitiikka 2014;1:66-75. http://urn.fi/URN:NBN:fi-fe2014031821621 (in Finnish). Accessed 1 Feb 2020.</ref>. Its theoretical foundation is in graph theory<ref name="bondy2008">Bondy, J. A.; Murty, U. S. R. (2008). Graph Theory. Springer. ISBN 978-1-84628-969-9.</ref> and systematic information structures. Open policy practice especially focuses on promoting the openness, flow, and use of information in decision processes (Figure 1). Its purpose is to give practical guidance for the whole decision process, from ideating possible actions to assessing impacts, deciding between options, implementing decisions, and finally evaluating outcomes. It aims to be applicable to all kinds of societal decision situations in any administrative area or discipline. An ambitious objective of open policy practice is to be so effective that a citizen can observe improvements in decisions and outcomes, and so reliable that a citizen is reluctant to believe claims that are in contradiction with the shared understanding produced by open policy practice.
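To make the graph-based foundation concrete, the sketch below represents a small decision situation as a directed graph in which decisions, substantive variables, and objectives are nodes and labelled relations are edges, in the spirit of the diagrams used in the assessments. The node names, relation labels, and the whole data structure are illustrative only and are not part of the Opasnet toolchain.

<syntaxhighlight lang="python">
# A minimal sketch of a decision situation represented as a directed graph:
# decisions, substantive variables, and objectives are nodes; labelled
# (mostly causal) relations are edges. Node names and relation labels are
# invented for illustration and are not part of the Opasnet toolchain.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # e.g. "decision", "variable", "objective", "stakeholder"

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)  # name -> Node
    edges: list = field(default_factory=list)  # (tail, predicate, head)

    def add_node(self, name, kind):
        self.nodes[name] = Node(name, kind)

    def add_edge(self, tail, predicate, head):
        self.edges.append((tail, predicate, head))

    def parents(self, name):
        """Upstream nodes that directly affect the given node."""
        return [tail for tail, _, head in self.edges if head == name]

g = Graph()
g.add_node("Reduce incineration emissions", "decision")
g.add_node("Dioxin concentration in Baltic herring", "variable")
g.add_node("Dioxin intake from food", "variable")
g.add_node("Population health", "objective")
g.add_edge("Reduce incineration emissions", "decreases", "Dioxin concentration in Baltic herring")
g.add_edge("Dioxin concentration in Baltic herring", "determines", "Dioxin intake from food")
g.add_edge("Dioxin intake from food", "affects", "Population health")

print(g.parents("Population health"))  # ['Dioxin intake from food']
</syntaxhighlight>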


Open policy practice is based on the five principles presented in Table 1. The principles can be met if the purpose of policy support is set to produce '''shared understanding''' (a situation where different facts, values, and disagreements related to a decision situation are understood and documented). The description of shared understanding (and consequently improved actions) is thus the main output of open policy practice (see also Figure 2). It is a product that guides the decision and is the basis for evaluation of outcomes.


This guidance is formalised as '''evaluation and management''' of the work and knowledge content during a decision process. It defines the criteria against which the knowledge process needs to be evaluated and managed. It contains methods to look at what is being done, whether the work is producing the intended knowledge and outputs, and what needs to be changed. Each task is evaluated before, during, and after the actual execution, and the work is iteratively managed based on this.


The '''execution''' of a decision process is about collecting, organising, and synthesising scientific knowledge and values in order to achieve objectives by informing the decision maker and stakeholders. A key part is open assessment, which typically estimates the impacts of the planned decision options. Assessment and knowledge production are also performed during the implementation and evaluation steps. Execution also contains the acts of making and implementing decisions; however, these are such case-specific processes, depending on the topic, the decision maker, and the societal context, that they are not discussed in this article.
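As a minimal illustration of what an impact estimate for decision options can look like, the sketch below compares two hypothetical options on a single outcome and propagates parameter uncertainty by simple random sampling. All option names, parameter values, and distributions are invented for this example; real open assessments are built from many interlinked variables and data sources.

<syntaxhighlight lang="python">
# Minimal sketch: compare two hypothetical decision options on one outcome,
# propagating parameter uncertainty by simple random sampling.
# All option names, parameter values, and distributions are invented.

import random

N = 10_000  # number of samples

def exposure(option):
    # Hypothetical exposure level under each option (arbitrary units).
    baseline = random.gauss(10.0, 2.0)
    reduction = {"business as usual": 0.0, "stricter emission limits": 0.4}[option]
    return max(baseline * (1.0 - reduction), 0.0)

def health_impact(exposure_level):
    # Hypothetical linear exposure-response relation with an uncertain slope.
    slope = random.gauss(1.5, 0.3)
    return slope * exposure_level

for option in ("business as usual", "stricter emission limits"):
    samples = sorted(health_impact(exposure(option)) for _ in range(N))
    mean = sum(samples) / N
    low, high = samples[int(0.05 * N)], samples[int(0.95 * N)]
    print(f"{option}: mean impact {mean:.1f} (90% interval {low:.1f}-{high:.1f})")
</syntaxhighlight>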


[[image:Open policy practice.png|thumb|400px|Figure 2. The three parts of open policy practice. The timeline goes roughly from left to right, but all work should be seen as iterative processes. Shared understanding as the main output is in the middle; expert-driven information production is a part of execution. Evaluation and management gives guidance to the execution.]]


=== Shared understanding ===


Shared understanding is a situation where all participants' views about a particular topic have been understood, described, and documented well enough so that people can know what facts, opinions, reasonings, and values exist and what agreements and disagreements exist and why. Shared understanding is produced in collaboration by decision makers, experts, and stakeholders. Each group brings in their own knowledge and concerns. Shared understanding aims to reflect all five principles of open policy practice. This creates requirements for the methods that can be used to produce shared understanding.


Shared understanding is always about a particular topic and produced by a particular group of participants. Depending on the participants, the results might differ, but with an increasing number of participants, the result putatively approaches a shared understanding of the society as a whole. Ideally, each participant agrees that the written description correctly contains their own thinking about the topic. Participants should even be able to correctly explain what other thoughts there are and how they differ from their own. Ideally, any participant can learn, understand, and explain any thought represented in the group. Importantly, there is no need to agree on things, just to agree on what the disagreements are about. Therefore, shared understanding is not the same as consensus or agreement.
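The sketch below shows, in a deliberately simplified form, what documenting agreements and disagreements can mean in practice: each statement records who endorses it, who contests it, and on what grounds, so that the disagreement itself becomes documented content. The statements, participants, and data structure are invented for illustration.

<syntaxhighlight lang="python">
# A deliberately simplified sketch of documenting shared understanding:
# each statement records who endorses or contests it and on what grounds,
# so the disagreements themselves become documented, inspectable content.
# Statements and participants are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    kind: str                                          # "fact" or "value"
    endorsed_by: dict = field(default_factory=dict)    # participant -> reasoning
    contested_by: dict = field(default_factory=dict)   # participant -> reasoning

    def is_disputed(self):
        return bool(self.endorsed_by) and bool(self.contested_by)

statements = [
    Statement("Stricter limits would reduce dioxin intake from fish.", "fact",
              endorsed_by={"expert A": "consistent with emission and intake estimates"}),
    Statement("Reducing intake is worth the cost to the fishing industry.", "value",
              endorsed_by={"citizen group": "children's health is the priority"},
              contested_by={"industry representative": "costs fall on small coastal fisheries"}),
]

# Shared understanding does not require consensus, only that disagreements
# and their reasons are visible to every participant.
for s in statements:
    status = "disputed" if s.is_disputed() else "not disputed"
    print(f"[{s.kind}] {s.text} -> {status}")
</syntaxhighlight>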


Shared understanding has potentially several purposes that all aim to improve the quality of societal decisions. It helps people understand complex policy issues. It helps people see their own thoughts from a wider perspective and thus increases acceptance of decisions. It improves trust in decision makers; but it may also erode trust if the actions of a decision maker are not understandable based on shared understanding. It dissects each difficult detail into separate discussions and then collects statements into an overview; this helps to allocate the time resources of participants efficiently to critical issues. It improves awareness of new ideas. It releases the full potential of the public to prepare, inform, and make decisions. How well these purposes have been fulfilled in practical assessments is discussed in Results.


'''Test of shared understanding'''


''Test of shared understanding'' can be used to evaluate how well shared understanding has been achieved. In a successful case, all participants of a decision process give positive answers to the questions in Table 2. In a way, shared understanding is a metric for evaluating how well decision makers have embraced the knowledge base of the decision situation.
 
 
 
=== Open assessment ===
 
''Open assessment'' is a method for performing impact assessments using extended causal diagrams, knowledge crystals, and open online assessment tools. Because the focus is on assessing impacts ex ante, open assessments are typically performed before a decision is made. The focus is necessarily on expert knowledge and how to organise it, although prioritisation is only possible if the objectives and valuations of the decision maker are known.
 
As a research topic, open assessment attempts to answer this research question: "How can scientific information and value judgements be organised for improving societal decision-making in a situation where open participation is allowed?" This question was in our minds when we developed many of the ideas presented in this article. As can be seen, openness, participation, and values are taken as given premises. In the early 2000's, this was far from common practice, although these ideas had been proposed before we started to develop practices based on them<ref name="nrc1996"/>.
 
The main focus since the beginning was to think about information and information flows rather than jurisdictions, roles, or hierarchies. So, we deliberately ignored questions like what kinds of scientific committees are needed to provide relevant high-quality advice, or how expert opinions should be included in the work of e.g. a government, parliament, or municipal council. The idea was rather that if the information production process is completely open, it can include information from any committee or individual as long as the quality aspect is successfully resolved. And if all useful information related to a decision can be synthesised and made available to everyone, then any kind of decision-making body could use that information. A generic approach was chosen so that it would be helpful irrespective of the structure of the administration.
 
Of course, this does not mean that any kind of organisation or individual is equally prone or capable of using assessment information. It simply means that we considered it as a separate question. Having said that, there was also a thought that if a good assessment is able to produce some clear and unequivocal conclusion that the whole public can see and understand, it will become much harder for any decision maker to deviate from that conclusion.
 
'''Principles in open assessment
 
There are guidance about crowdsourced policymaking<ref>Aitamurto T, Landemore H. Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law in Finland. Journal of Social Media for Organizations. 2015;2:1:1-19.</ref>, and similar ideas have been utilised in open assessment. Openness, causality, knowledge crystals, and reuse are principles that have been built in the functionalities of the tools in aim to make it easy to obey them (Table 2). Drawing extended causal diagrams puts emphasis on causalities, and publishing them on open web-workspaces complies with openness automatically. Knowledge crystals promote reuse of information. Some other principles require more understanding and effort from the participants.
 
Intentionality is about making the values and objectives of decision makers visible and under scrutiny. There exist models for describing facts and values in a coherent dual system, and such methods should be encouraged<ref>von Winterfeldt D. Bridging the gap between science and decision making. PNAS 2013;110:3:14055-14061. http://www.pnas.org/content/110/Supplement_3/14055.full</ref>. However, this often requires extra effort, and also it may be tactically useful for a decision maker not to conceal all their values.
 
Criticism based on observations and rationality is a central idea in the scientific method, and therefore it is also part of open assessment. However, its implementation into the tools is difficult. For example, only in rare cases it is possible or practical to develop logical propositions that would automatically rule out a statement if another statement is found true. So, most critique is verbal or written discussion between participants and difficult to automate. Still, we have found some useful information structures for criticism.
 
Discussions can be organised according to pragma-dialectical argumentation rules<ref>Eemeren FH van, Grootendorst R. A systematic theory of argumentation: The pragma-dialectical approach. Cambridge: Cambridge University Press; 2004.</ref>, so that arguments form a hierarchical thread pointing to a main statement or statements. Attack arguments are used to invalidate arguments, and defends are used to prevent from attacks, while comments are used to clarify issues. At the moment, such hierarchical structures are built by hand based on what people say. But it has clear added value for a reader, because even a lengthy discussion can be summarised into a short statement after a resolution is found, and any thread can be individually scrutinised.
 
Grouping and respect are principles that aim to motivate and guide individuals to collaborate online. It has been found out that being part of an identifiable group clearly does both: people participate more actively and are more aware of what they can and are expected to do<ref name="noveck2010"/>. Respect includes the idea that merit based on contributions should be measured and evaluated, and respect should be given to participants based on these evaluations. Respect is needed also because systematic criticism easily affects people emotionally, even when the discussion is about substance rather than person. It is therefore important to show that all contributions and contributors are valued even when a contribution is criticised. Although these two principles have been identified as very important, they are currently only implemented in face-to-face meetings by facilitators giving direct feedback; Opasnet web-workspace does not have good functionalities for them.


{| {{prettytable}}
|+'''Table 2. Test of shared understanding.
! Question !! Who is asked?
|----
| Is all relevant and important information described?
|rowspan="4"|All participants of the decision processes (including knowledge gathering processes)
|----
| Are all relevant and important value judgements described? (Those of all participants, not just decision makers.)
|----
| Are the decision maker's decision criteria described?
|----
| Is the decision maker's rationale from the criteria to the decision described?
|}


Everything that is done aims to offer better understanding about impacts of the decision related to the decision maker's objectives. However, conclusions may be sensitive to initial values, and ignoring stakeholders' views may cause trouble at a later stage. Therefore, other values in society are also included in shared understanding.
 
Shared understanding may have different levels of ambition. On an easy level, shared understanding is taken as general guidance and an attitude towards other people's opinions. Main points and disagreements are summarised in writing, so that an outsider is able to understand the overall picture.


On an ambitious level, the idea of documenting all opinions and the reasoning behind them is taken literally. Participants' views are actively elicited and tested to see whether a facilitator is able to reproduce their thought processes. The objective is to document the thinking in such detail that a participant's views on the key questions of a policy can be anticipated from the description they have given. This is done by using insight networks, knowledge crystals, and other methods (see below). Written documentation with an available and usable structure is crucial, as it allows participation without being physically present. It also spreads shared understanding to decision makers and to those who were not involved in discussions.


Good descriptions of shared understanding are able to quickly and easily incorporate new information or scenarios from the participants. They can be examined using different premises, i.e., a user should be able to quickly update the knowledge base, change the point of view, or reanalyse how the situation would look with alternative valuations. Ideally, a user interface would allow the user to select input values with intuitive menus and sliders and would show the impacts of changes instantly.


Shared understanding as the key objective gives guidance to the policy process in general. But it also creates requirements that can be described as quality criteria for the process and used to evaluate and manage the work.


=== Evaluation and management ===


Evaluation is about monitoring and checking the plans and progress of decisions and their implementation. Management is about adjusting work and updating actions based on evaluation to ensure that objectives are reached. Several criteria were developed in open policy practice to evaluate and describe decision support work. Their purpose is to help participants focus on the most important parts of open policy practice.


Guidance exists about crowdsourced policymaking<ref>Aitamurto T, Landemore H. Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law in Finland. Journal of Social Media for Organizations. 2015;2:1:1-19.</ref>, and similar ideas have been utilised in open assessment.


'''Properties of good policy support


There is a need to evaluate assessment work before, during, and after it is done<ref name="pohjola2013b">Pohjola MV, Pohjola P, Tainio M, Tuomisto JT. Perspectives to Performance of Environment and Health Assessments and Models—From Outputs to Outcomes? (Review). Int. J. Environ. Res. Public Health 2013;10:2621-2642 doi:10.3390/ijerph10072621</ref>. A key question is what makes good policy support and what criteria should be used (see Table 3)<ref name="sandstrom2014">Sandström V, Tuomisto JT, Majaniemi S, Rintala T, Pohjola MV. Evaluating effectiveness of open assessments on alternative biofuel sources. Sustainability: Science, Practice & Policy 2014;10;1. doi:10.1080/15487733.2014.11908132 Assessment: http://en.opasnet.org/w/Biofuel_assessments. Accessed 1 Feb 2020.</ref>.
 
Fulfilling all these criteria is of course not a guarantee that the outcomes of a decision will be successful. But the properties listed have been found to be important determinants of the success of decision processes. In projects utilising open policy practice, poor performance of specific properties could be linked to particular problems observed. Evaluating these properties before or during a decision process could help to analyse what exactly is wrong, as problems with such properties are by then typically visible. Thus, using this evaluation scheme proactively makes it possible to manage the decision making process towards higher quality of content, applicability, and efficiency.


{|{{prettytable}}
|+ '''Table 3. Properties of good policy support. Here, "assessment" can be viewed as a particular expert work producing a report about a specific question, or as a wider description of shared understanding about a whole policy process. Assessment work is done before, during, and after the actual decision.
|-----
! Category
! Description
! Guiding questions
! Related principles
|-----
| Quality of content
| Specificity, exactness and correctness of information. Correspondence between questions and answers.
| How exact and specific are the ideas in the assessment? How completely does the (expected) answer address the assessment question? Are all important aspects addressed? Is there something unnecessary?
| Openness, causality, criticism
|-----
| rowspan="4"| Applicability
| ''Relevance'': Correspondence between output and its intended use.
| How well does the assessment address the intended needs of the users? Is the assessment question good in relation to the purpose of the assessment?
| Collaboration, openness, criticism, intentionality
|-----
| ''Availability'': Accessibility of the output to users in terms of e.g. time, location, extent of information, extent of users.
| Is the information provided by the assessment available when, where and to whom it is needed?
| Openness
|-----
| ''Usability'': Potential of the information in the output to generate understanding among its user(s) about the topic of assessment.
| Are the intended users able to understand what the assessment is about? Is the assessment useful for them?
| Collaboration, openness, causality, intentionality
|-----
| ''Acceptability'': Potential of the output being accepted by its users. Fundamentally a matter of its making and delivery, not its information content.
| Is the assessment (both its expected results and the way the assessment is planned to be made) acceptable to the intended users?
| Collaboration, openness, criticism, intentionality
|-----
| Efficiency
| Resource expenditure of producing the assessment output either in one assessment or in a series of assessments.
| How much effort is needed for making the assessment? Is it worth spending the effort, considering the expected results and their applicability for the intended users? Are the assessment results useful also in some other use?
| Collaboration, openness
|}


''Quality of content'' refers to the output of an assessment, typically a report, model or summary presentation. Its quality is obviously an important property. If the facts are plain wrong, it is more likely to misguide than lead to good decisions. Specificity, exactness, and correctness describe how large the remaining uncertainties are and how close the answers probably are to the truth (compared with a gold standard). In some statistical texts, similar concepts have been called precision and accuracy, although with decision support they should be understood in a flexible rather than strictly statistical sense.<ref>Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. New York: Oxford University Press; 1991.</ref> Coherence means that the answers given are those to the questions asked.
 
''Applicability'' is an important aspect of evaluation. It looks at properties that affect how well the decision support can and will be applied. It is independent of the quality of content, i.e. despite high quality, an assessment may have very poor applicability. The opposite may also be true, as sometimes faulty assessments are actively used to promote policies. However, usability typically decreases rapidly if the target audience evaluates an assessment to be of poor quality.
 
Relevance asks whether a good question was asked to support decisions. Identification of good questions requires considerable deliberation among different groups, including decision makers and experts, and online forums may potentially help in this.
 
Availability is a more technical property and describes how easily a user can find the information when needed. A typical problem is that a potential user does not know that a piece of information exists even if it could be easily accessed.


Usability may differ from user to user, depending on e.g. background knowledge, interest, or time available to learn the content.


Acceptability is a very complex issue and most easily detectable when it fails. A common situation is that stakeholders feel that they have not been properly heard and therefore any output from decision support is perceived faulty. Doubts about the credibility of the assessor also fall into this category.


''Efficiency'' evaluates resource use when performing an assessment or other decision support. Money and time are two common measures for this. Often it is most useful to evaluate efficiency before an assessment is started. Is it realistic to produce new important information given the resources and schedule available? If more/less resources were available, what value would be added/lost? Another aspect of efficiency is that if assessments are done openly, reuse of information becomes easier and the marginal cost and time of a new assessment decrease.

All properties of decision support, not just efficiency or quality of content, are meant to guide planning, execution, and evaluation of the whole decision support work. If they are always kept in mind, they can improve daily work.


'''Settings of assessments


Sometimes a decision process or an assessment lacks a clear understanding of what should be done and why. An assessment may even be launched in the hope that it will somehow reveal what the objectives or other important factors are. ''Settings of assessments'' (Table 4) are used to explicate these so that useful decision support can be provided<ref name="pohjola2014">Pohjola MV. Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy. In: Bassim N, editor. Comprehensive Materials Processing Vol. 8. Elsevier Ltd; 2014. pp 151–162. doi:10.1016/B978-0-08-096532-1.00814-1</ref>. Examining the sub-attributes of an assessment question can also help:
* Research question: the actual question of an open assessment
* Boundaries: temporal, geographical, and other limits within which the question is considered
* Decisions and scenarios: decisions and options to assess and scenarios to consider
* Timing: the schedule of the assessment work
* Participants: people who will or should contribute to the assessment
* Users and intended use: users of the final assessment report and purposes of the use


{|{{prettytable}}
|+ '''Table 4. Important settings for environmental health and other impact assessments within the context of public policy making.
|----
! Attribute
! Questions
! Examples
|-----
| Impacts
|
* Which impacts are addressed in assessment?
* Which impacts are the most significant?
* Which impacts are the most relevant for decision making?
| Environment, health, cost, equity
|-----
| Causes
|
* Which causes of impacts are recognized in assessment?
* Which causes of impacts are the most significant?
* Which causes of impacts are the most relevant for decision making?
| Production, consumption, transport, heating, power production, everyday life
|}


'''Interaction and openness


In open policy practice, the method itself is designed to facilitate openness in all its dimensions. The ''dimensions of openness'' help to identify if and how the work deviates from the ideal of openness, so that the work can be improved in this respect (Table 5)<ref name="pohjola2011">Pohjola MV, Tuomisto JT. Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 2011;10:58 http://www.ehjournal.net/content/10/1/58.</ref>.


{| {{prettytable}}
|+ '''Table 5. Dimensions of openness in decision making.
! Dimension
! Description
|-----
| Scope of participation
| Who is allowed to participate in the process?
|-----
| Access to information
| What information about the issue is made available to participants?
|-----
| Scope of contribution
| Which aspects of the issue are participants invited or allowed to contribute to?
|-----
| Impact of contribution
| How much are participant contributions allowed to have influence on the outcomes? How much weight is given to participant contributions?
|}


Openness can also be examined based on how intensive it is and what kind of collaboration between decision makers, experts, and stakeholders is aimed for<ref name="pohjola2012"/><ref>van Kerkhoff L, Lebel L. Linking knowledge and action for sustainable development. Annu. Rev. Environ. Resour. 2006. 31:445-477. doi:10.1146/annurev.energy.31.102405.170850</ref>. Different approaches are described in Table 6.


{|{{prettytable}}
|+ '''Table 6. Categories of interaction within the knowledge-policy interaction framework.
! Category
! Description
|-----
| Isolated
| Assessment and use of assessment results are strictly separated. Results are provided for intended use, but users and stakeholders cannot interfere with the making of the assessment.
|-----
| Informing
| Assessments are designed and conducted according to specified needs of intended use. Users and limited groups of stakeholders may have a minor role in providing information to the assessment, but mainly serve as recipients of assessment results.
|-----
| Participatory
| Broader inclusion of participants is emphasised. Participation is, however, treated as an add-on alongside the actual processes of assessment and/or use of assessment results.
|-----
| Joint
| Involvement and exchange of summary-level information among multiple actors is emphasised in scoping, management, communication, and follow-up of assessment. On the level of assessment practice, actions by different actors in different roles (assessor, manager, stakeholder) remain separate.
|-----
| Shared
| Different actors engage in open collaboration upon determining assessment questions, seeking answers to them, and implementing answers in practice. However, the actors involved in an assessment retain their roles and responsibilities.
|}


These evaluation methods guide the actual execution of a decision process.
 
=== Execution and open assessment ===
 
''Execution'' is the work during a decision process, including ideating possible actions, assessing impacts, deciding between options, implementing decisions, and evaluating outcomes. Execution is guided by information produced in evaluation and management. The focus of this article is on knowledge processes that support decisions. Therefore, methods to reach or implement a decision are not discussed here.
 
''Open assessment'' is a method for performing impact assessments using insight networks, knowledge crystals, and web-workspaces (see below). Open assessment is an important part of execution and the main knowledge production method in open policy practice.
 
An assessment aims to quantify important objectives, and especially compare differences in impacts resulting from different decision options. In an assessment, current scientific information is used to answer policy-relevant questions that inform decision makers about the impacts of different options.
 
Open assessments are typically performed before a decision is made (but e.g. the city of Helsinki has used both ex ante and ex post approaches with its climate strategy<ref name="hnh2035"/>). The focus is by necessity on expert knowledge and how to organise it, although prioritisation is only possible if the objectives and valuations of the decision maker and stakeholders are known. For a list of major open assessments, see Appendix S1.
 
As a research topic, open assessment attempts to answer this question: "How can factual information and value judgements be organised for improving societal decision making in a situation where open participation is allowed?" As can be seen, openness, participation, and values are taken as given premises. This was far from common practice, but not completely new, when the first open assessments were performed in the early 2000s<ref name="nrc1996"/>.
 
Since the beginning, the main focus has been to think about information and information flows, rather than jurisdictions, political processes, or hierarchies. So, open assessment deliberately focuses on impacts and objectives rather than questions about procedures or mandates of decision support. The premise is that if the information production and dissemination are completely open, the process can be generic, and an assessment can include information from any contributor and inform any kind of decision-making body. Of course, quality control procedures and many other issues must be functional under these conditions.
 
==== Co-creation ====
 
''Co-creation'' is a method for producing open contents in collaboration, and in this context specifically knowledge production by self-organised groups. It is a discipline in itself<ref name="mauser2013"/>, and guidance about how to manage and facilitate co-creation can be found elsewhere. Here, only a few key points are raised about facilitation and structured discussion.
 
Information has to be collected, organised, and synthesised; facilitators need to motivate and help people to share their information. This requires dedicated work and skills that are typically available neither among experts nor among decision makers. Co-creation also involves specific practices and methods, such as motivating participation, facilitating discussions, clarifying and organising argumentation, moderating contents, using probabilities and expert judgement for describing uncertainties, or developing insight networks (see below) or quantitative models. Sometimes the skills needed are called interactional expertise.
 
Facilitation helps people participate and interact in co-creation processes using hearings, workshops, online questionnaires, wikis, and other tools. In addition to practical tools, facilitation implements principles that have been seen to motivate participation<ref name="noveck2010"/>. Three are worth mentioning here, because they have been shown to significantly affect participation.
* ''Grouping'': Facilitation methods are used to promote the participants' feeling of being important members of a group that has a meaningful, shared purpose.
* ''Trust'': Facilitation builds trust among people that they can safely express their ideas and concerns, and that other members of the group support participation even if they disagree on the substance.
* ''Respect'': Contributions are systematically evaluated according to their merit so that each participant receives the respect they deserve based on their contributions as individuals or members of a group.
 
''Structured discussions'' are synthesised and reorganised discussions whose purpose is to highlight key statements and the argumentation that leads to their acceptance or rejection. Discussions can be organised according to pragma-dialectical argumentation rules<ref>Eemeren FH van, Grootendorst R. A systematic theory of argumentation: The pragma-dialectical approach. Cambridge: Cambridge University Press; 2004.</ref> or an argumentation framework<ref>Dung PM. (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games. Artificial Intelligence. 77 (2): 321–357. doi:10.1016/0004-3702(94)00041-X.</ref>, so that arguments form a hierarchical thread pointing to a main statement or statements. Attack arguments are used to invalidate other arguments by showing that they are either untrue or irrelevant in their context; defend arguments are used to protect against attacks; and comments are used to clarify issues. For an example, see Figure S2-5 in Appendix S2 and links thereof.
 
The discussions can be natural discussions that are reorganised afterwards or online discussions where the structure of contributions is governed by the tools used. A test environment exists for structured argumentation<ref>Hastrup T. Knowledge crystal argumentation tree. https://dev.tietokide.fi/?Q10. Web tool. Accessed 1 Feb 2020.</ref>, and Opasnet has R functions for analysing structured discussions written on wiki pages.
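
As a minimal illustration of how such a hierarchical discussion could be analysed computationally, the sketch below evaluates a small, hypothetical argument tree in R: a statement stands if none of its attacking arguments stands. The data frame and function are illustrative only and do not reproduce the actual Opasnet implementation.

<syntaxhighlight lang="r">
# Minimal sketch: evaluate a hierarchical discussion where each argument
# either attacks or comments on its parent. An argument "stands" if none
# of its attacking children stands. (Hypothetical data; not Opasnet code.)
arguments <- data.frame(
  id     = c("S1", "A1", "A2", "D1"),
  parent = c(NA,   "S1", "S1", "A1"),
  type   = c("statement", "attack", "comment", "attack"),
  text   = c("Emission limit should be tightened.",
             "The health benefit is negligible.",
             "Please cite the exposure data.",
             "New exposure data show a clear benefit."),
  stringsAsFactors = FALSE
)

stands <- function(id, args) {
  attackers <- args$id[!is.na(args$parent) & args$parent == id & args$type == "attack"]
  # An argument stands if every attacking argument is itself defeated.
  all(!vapply(attackers, stands, logical(1), args = args))
}

stands("S1", arguments)  # TRUE: the only attack (A1) is itself defeated by D1
</syntaxhighlight>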
 
==== Insight networks ====
 
''Insight networks'' are graphs as defined by graph theory<ref name="bondy2008"/>. In an insight network, actions, objectives, and other issues are depicted with nodes, and their causal and other relations are depicted with arrows (also called edges). An example is shown in Figure 3, which describes a potential dioxin-related decision to clean up emissions from waste incineration. The logic of such a decision can be described as a chain or network of causally dependent issues: reduced dioxin emissions to air improve air quality and reduce dioxin deposition into the Baltic Sea; this has a favourable effect on concentrations in Baltic herring; this reduces human exposure to dioxins via fish; and this helps to achieve the ultimate objective of reduced health risks from dioxin. Insight networks aim to facilitate understanding, analysing, and discussing complex policy issues.
 
[[image:Bioaccumulation of dioxin.svg|thumb|500px|Figure 3. Insight network about dioxins, Baltic fish, and health as described in the BONUS GOHERR project<ref name="goherr2020">Tuomisto JT, Asikainen A, Meriläinen P et Haapasaari P. Health effects of nutrients and environmental pollutants in Baltic herring and salmon: a quantitative benefit-risk assessment. BMC Public Health 20, 64 (2020). https://doi.org/10.1186/s12889-019-8094-1 Assessment: http://en.opasnet.org/w/Goherr_assessment, data archive: https://osf.io/brxpt/. Accessed 1 Feb 2020.</ref>. Decisions are shown as red rectangles, decision makers and stakeholders as yellow hexagons, decision objectives as yellow diamonds, and substantive issues as blue nodes. The relations are written on the diagram as predicates of sentences where the subject is at the tail of the arrow and the object is at the tip of the arrow. For other insight networks, see Appendix S2.]]
Causal modelling and causal graphs as such are old ideas, and various qualitative and quantitative methods have been developed for them. However, insight networks add two ideas: a) all non-causal issues that are relevant to the decision can and should also be linked to the causal core in some way, and therefore b) the network can be used effectively for clarifying one's ideas, contributing, and communicating a whole decision situation rather than just its causal core. In other words, a participant in a policy discussion should be able to make a reasonable connection between what they are saying and some node in an insight network developed for that policy issue. If they are not able to make such a link, their point is probably irrelevant.
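
As a minimal sketch, the causal core of the dioxin example (Figure 3) could be written down as an edge list and handled as a graph, for instance with the igraph package in R. The node names and predicates below are simplified from the figure and are illustrative only.

<syntaxhighlight lang="r">
library(igraph)

# Minimal sketch: the causal core of the dioxin example (Figure 3) as an
# edge list. Each row is a claim: subject --predicate--> object.
edges <- data.frame(
  from      = c("Emission reduction", "Dioxin emissions", "Dioxin deposition in Baltic Sea",
                "Dioxin in Baltic herring", "Dioxin exposure via fish"),
  to        = c("Dioxin emissions", "Dioxin deposition in Baltic Sea", "Dioxin in Baltic herring",
                "Dioxin exposure via fish", "Health risk from dioxin"),
  predicate = c("reduces", "determines", "accumulates as", "determines", "increases")
)

net <- graph_from_data_frame(edges, directed = TRUE)
# plot(net, edge.label = E(net)$predicate)  # visualise the insight network
</syntaxhighlight>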
 
The first implementations of insight networks were about toxicology of dioxins<ref name="tuomisto1999">Tuomisto JT. TCDD: a challenge to mechanistic toxicology [Dissertation]. Kuopio: National Public Health Institute A7; 1999.</ref> and restoration of a closed asbestos mine area<ref name="paakkila1999">Tuomisto JT, Pekkanen J, Alm S, Kurttio P, Venäläinen R, Juuti S et al. Deliberation process by an explicit factor-effect-value network (Pyrkilo): Paakkila asbestos mine case, Finland. Epidemiol 1999;10(4):S114.</ref><sup>c</sup>. In the early cases, the main purpose was to give structure to discussion about and examination of an issue rather than to be a backbone for quantitative models. In later implementations, such as in the composite traffic assessment<ref name="tuomisto2005">Tuomisto JT; Tainio M. An economic way of reducing health, environmental, and other pressures of urban traffic: a decision analysis on trip aggregation. BMC PUBLIC HEALTH 2005;5:123. http://biomedcentral.com/1471-2458/5/123/abstract Assessment: http://en.opasnet.org/w/Cost-benefit_assessment_on_composite_traffic_in_Helsinki. Accessed 1 Feb 2020.</ref> or BONUS GOHERR project<ref name="goherr2020"/>, diagrams have been used for both purposes. Most open assessments discussed later (and listed in Appendix S1) have used insight networks to structure and illustrate their content.
 
==== Knowledge crystals ====
 
''Knowledge crystals'' are web pages where specific research ''questions'' are collaboratively ''answered'' by producing ''rationale'' with any data, facts, values, reasoning, discussion, models, or other information that is needed to convince a critical, rational reader (Table 7).
 
Knowledge crystals have a few distinct features. The web page of a knowledge crystal has a permanent identifier or URL and an explicit topic, or question, which does not change over time. A user may come to the same page several times and find an up-to-date answer to the same topic. The answer changes as new information becomes available, and anyone is allowed to bring in new relevant information as long as certain rules of co-creation are followed. In a sense, the answer of a knowledge crystal is never final but it is always usable.
 
A knowledge crystal is a practical information structure that was designed to comply with the principles of open policy practice. Open data principles are used when possible<ref>Open Knowledge International. The Open Definition. http://opendefinition.org/. Accessed 1 Feb 2020.</ref>. For example, openness and criticism are implemented by allowing anyone to contribute but only after critical examination. Knowledge crystals differ from open data, which contains little to no interpretation, and from scientific articles, which are not updated. The rationale is the place for new information and discussions, and resolutions about new information may change the answer.
 
The purpose of knowledge crystals is to offer a versatile information structure for nodes in an insight network that describes a complex policy issue. They handle research questions of any topic and describe all causal and non-causal relations from other nodes (i.e. the nodes that may affect the answer of the node under scrutiny). They contain information as necessary: text, images, mathematics, or other forms, both quantitative and qualitative. They handle facts or values depending on the questions, and withstand misconceptions and fuzzy thinking as well. Finally, they are intended to be found online by anyone interested, and their main message to be understood and used even by a non-expert.
 
{|{{prettytable}}
|+'''Table 7. The ''attributes'' of a knowledge crystal.
! Attribute
! Description
|-----
| '''Name'''
| An identifier for the knowledge crystal. Each page has a permanent, unique name and identifier or URL.
|-----
| '''Question'''
| A research question that is to be answered. It defines the scope of the knowledge crystal. Assessments have specific sub-attributes for questions (see section Settings of assessments)
|-----
| '''Answer'''
| An understandable and useful answer to the question. It is the current best synthesis of all available data. Typically it has a descriptive easy-to-read summary and a detailed quantitative ''result'' published as open data. An answer may contain several competing hypotheses, if they all hold against scientific critique. This way, it may include an accurate description of the uncertainty of the answer, often in a probabilistic way.
|-----
| '''Rationale'''
| Any information that is necessary to convince a critical rational reader that the answer is credible and usable. It presents to a reader the information required to derive the answer and explains how it is formed. It may have different sub-attributes depending on the page type; some examples are listed below.
* '''Data''' tell about direct observations (or expert judgements) about the topic.
* '''Dependencies''' tell what is known about how upstream knowledge crystals (i.e. causal parents) affect the answer. Dependencies may describe functional or probabilistic relationships. In an insight network, dependencies are described as arrows pointing toward the knowledge crystal.
* '''Calculations''' are an operationalisation of how to calculate or derive the answer. It uses algebra, computer code, or other explicit methods if possible.
* '''Discussions''' are structured or unstructured discussions about the details of the substance, or about the production of substantive information. On a wiki, discussions are typically located on the talk page of the substance page.
|----
| Other
| In addition to attributes, it is practical to have clarifying subheadings on a knowledge crystal page. These include: See also, Keywords, References, Related files
|}
 
There are different types of knowledge crystals for different uses. ''Variables'' contain substantive topics such as emissions of a pollutant, food consumption or other behaviour of an individual, or disease burden in a population (for examples, see Figure 3 and Appendix S2.) ''Assessments'' describe the information needs of particular decision situations and work processes designed to answer those needs. They may also describe whole models (consisting of variables) for simulating impacts of a decision. ''Methods'' describe specific procedures to organise or analyse information. The question of a method typically starts with "How to...". For a list of all knowledge crystal types used at Opasnet web-workspace, see Appendix S3.
 
Openness and collaboration are promoted by design: knowledge crystals are modular, re-usable, and readable for humans and machines. This enables their direct use in several assessment models or internet applications, which is important for the efficiency of the work. Methods are used to standardise and facilitate the work across assessments.
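
A minimal sketch of the data structure is given below: a knowledge crystal represented as a plain R list mirroring the attributes of Table 7. The content, values, and helper names are hypothetical and do not represent the actual Opasnet implementation.

<syntaxhighlight lang="r">
# Minimal sketch: a knowledge crystal represented as a plain R list that
# mirrors the attributes of Table 7. Illustrative structure and values only;
# not the actual Opasnet or OpasnetUtils implementation.
knowledge_crystal <- list(
  name     = "Dioxin concentration in Baltic herring",
  question = "What is the average TEQ concentration in Baltic herring muscle?",
  answer   = list(
    summary = "A few pg/g fresh weight, declining over time.",           # hypothetical summary
    result  = data.frame(year = c(2000, 2010), teq_pg_g = c(8, 3))        # hypothetical data
  ),
  rationale = list(
    data         = "Monitoring data from national surveillance programmes.",
    dependencies = c("Dioxin emissions", "Bioaccumulation in the food web"),
    calculations = function(result) mean(result$teq_pg_g)
  )
)

knowledge_crystal$rationale$calculations(knowledge_crystal$answer$result)  # 5.5
</syntaxhighlight>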
 
==== Open web-workspaces ====
 
Insight networks, knowledge crystals, and open assessments are information objects that were not directly supported by any web-workspace available at the time of development. Therefore, web-workspaces have been developed specifically for open policy practice. There are two major web-workspaces for this purpose: Opasnet (designed for expert-driven open assessments) and Climate Watch (designed for evaluation and management of climate mitigation policies).
 
'''Opasnet
 
''Opasnet'' is an open wiki-based web-workspace and prototype for performing open policy practice, launched in 2006. It is designed to offer functionalities and tools for performing open assessments so that most if not all work can be done openly online. Its name is a short version of ''Open Assessors' Network'' and also comes from the Finnish word for guide, "opas". The purpose was to test and learn co-creation among environmental health experts and to start opening the assessment process to interested stakeholders.
 
Opasnet is based on the MediaWiki platform because of its open-source code, wide use and abundance of additional packages, long-term prospects, functionalities for good research practices (e.g. talk pages for meta-level discussions), and full and automatic version control. Two language versions of Opasnet exist. English Opasnet (en.opasnet.org) contains all international projects and most scientific information. Finnish Opasnet (fi.opasnet.org) contains mostly project material for Finnish projects and pages targeted at Finnish audiences. A project wiki, Heande (short for Health, the Environment, and Everything), requires a password and contains information that cannot (yet) be published, but the open alternatives are preferred.
 
Opasnet facilitates simultaneous development of theoretical framework, assessment practices, assessment work, and supporting tools. This includes e.g. information structures, assessment methods, evaluation criteria, and online software models and libraries.
 
For modelling functionalities, the statistical software R is used via an R–MediaWiki interface. R code can be written directly on a wiki page and run by clicking a button. The resulting objects can be stored on the server and fetched later by other code. Complex models can be run with a web browser without installing anything. The server has automatic version control and archival of the model description, data, code, and results.
 
An R package ''OpasnetUtils'' is available (CRAN repository cran.r-project.org) to support knowledge crystals and impact assessment models. It contains the necessary functions and information structures. Specific functionalities facilitate reuse and explicit quantification of uncertainties: scenarios can be defined on a wiki page or via a model user interface, and these scenarios can then be run without changing the model code. If input values are uncertain, uncertainties are automatically propagated through the model using Monte Carlo simulation.
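
The sketch below illustrates the underlying principle of Monte Carlo uncertainty propagation in base R, assuming hypothetical input distributions; it does not use the OpasnetUtils functions themselves.

<syntaxhighlight lang="r">
# Minimal sketch of the principle: uncertain inputs are described as samples
# and uncertainty propagates through the model by Monte Carlo simulation.
# Illustrative distributions only; not the OpasnetUtils implementation.
set.seed(1)
n <- 10000

exposure        <- rlnorm(n, meanlog = log(1.0), sdlog = 0.5)   # hypothetical exposure (pg/kg/day)
dose_response   <- rnorm(n, mean = 0.02, sd = 0.005)            # hypothetical risk per unit exposure
population_risk <- exposure * dose_response                     # propagated uncertainty

quantile(population_risk, c(0.05, 0.5, 0.95))  # summarise the resulting distribution
</syntaxhighlight>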
 
For data storage, ''Opasnet Base'', a MongoDB NoSQL database, is used. Each dataset must be linked to a single wiki page, which contains all the necessary descriptions and metadata about the data. Data can be uploaded to the database via a wiki page or a file uploader. The database has an open application programming interface for data retrieval.
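
As a minimal sketch of the storage idea, the following Python code stores and retrieves a dataset that is linked to the wiki page describing it. The collection layout, page identifier, and connection string are hypothetical and do not reproduce the actual Opasnet Base schema.

<syntaxhighlight lang="python">
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
db = client["opasnet_base_demo"]                    # hypothetical database name

# Store a dataset so that it is linked to the wiki page documenting it
db.datasets.insert_one({
    "page_ident": "Op_en1234",        # hypothetical identifier of the wiki page holding the metadata
    "variable": "PM2.5 concentration",
    "data": [
        {"city": "Helsinki", "year": 2018, "value": 7.2, "unit": "ug/m3"},
        {"city": "Kuopio", "year": 2018, "value": 5.1, "unit": "ug/m3"},
    ],
})

# Retrieve all data linked to that wiki page
for doc in db.datasets.find({"page_ident": "Op_en1234"}):
    print(doc["variable"], doc["data"])
</syntaxhighlight>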
 
For more details, see Appendix S4.
 
'''Climate Watch
 
[[File:System architecture of Climate Watch.png|thumb|600px|Figure 4. System architecture of the Climate Watch web-workspace.]]
Climate Watch is a web-workspace primarily for evaluating and managing climate mitigation actions (Figure 4). It was originally developed in 2018-2019 by the city of Helsinki for its climate strategy. From the beginning, scalability was a key priority: the web-workspace was made generic enough that it could easily be used by other municipalities in Finland and globally, and applied to the evaluation and management of topics other than climate mitigation.
 
Climate Watch is described in more detail by Ignatius and coworkers<ref>Ignatius S-M, Tuomisto JT, Yrjölä J, Muurinen R. (2020) From monitoring into collective problem solving: City Climate Tool. EIT Climate-KIC project: 190996 (Partner Accelerator).</ref>. In brief, Climate Watch consists of actions that aim to reduce climate emissions, and indicators that are supposedly affected by the actions and give insights about progress. Actions and indicators are knowledge crystals, and they are causally connected, thus forming an insight network. Each action and indicator has one or more contact people who are responsible for the reporting of progress (and sometimes for actually implementing the actions).
 
The requirements for choosing the technologies were wide availability, ease of development, and an architecture based on open application programming interfaces (APIs). The public-facing user interface uses the NextJS framework (https://nextjs.org/), which is based on the React user interface library (https://reactjs.org/) and provides support for server-side rendering and search engine optimisation. The backend is built using the Django web framework (https://www.djangoproject.com/), which provides the contact people with an administrator user interface. The data flows to the Climate Watch interface over a GraphQL API (https://graphql.org/). GraphQL was chosen because of its flexibility and performance and its wide traction in the web development community.
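
The sketch below shows how a client could read actions and their indicators over a GraphQL API with Python; the endpoint URL and the field names in the query are assumptions made for illustration, not the documented Climate Watch schema.

<syntaxhighlight lang="python">
import requests

GRAPHQL_URL = "https://api.watch.example.org/graphql"  # placeholder endpoint

# Hypothetical query: list mitigation actions with their linked indicators
query = """
{
  actions {
    name
    status
    indicators {
      name
      latestValue
    }
  }
}
"""

response = requests.post(GRAPHQL_URL, json={"query": query}, timeout=30)
response.raise_for_status()
for action in response.json()["data"]["actions"]:
    print(action["name"], action["status"])
</syntaxhighlight>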
 
Opasnet and Climate Watch have functional similarities but different technical solutions. The user interfaces for end-users and administrators in Climate Watch serve similar purposes as MediaWiki does in Opasnet; and while impact assessment and model development are performed using R in Opasnet, Climate Watch uses Python, Dash, and Jupyter.
 
==== Open policy ontology ====
 
''Open policy ontology'' is used to describe all the information structures and policy content in a systematic, coherent, and unambiguous way. The ontology is based on the concepts of open linked data and the Resource Description Framework (RDF) by the World Wide Web Consortium<ref>W3C. Resource Description Framework (RDF). https://www.w3.org/RDF/. Accessed 1 Feb 2020.</ref>.
 
The ontology is based on vocabularies with specified terms and meanings. The relations of terms are also explicit. The Resource Description Framework is based on the idea of triples, which have three parts: subject, predicate (or relation), and object. These can be thought of as sentences: an item (subject) is related to (predicate) another item or value (object), thus forming a claim. Claims can further be specified using qualifiers and backed up by references. Insight networks can be documented as triples, and a set of triples using this ontology can be visualised as insight network diagrams. Triple databases enable wide, decentralised linking of various sources and information.
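
For illustration, a single causal claim of an insight network can be written as a triple and serialised with the rdflib library; the namespace, item names, and property names below are assumptions, not the actual identifiers of the open policy ontology.

<syntaxhighlight lang="python">
from rdflib import Graph, Literal, Namespace

OPP = Namespace("http://example.org/opp/")   # hypothetical namespace for illustration

g = Graph()
emissions = OPP["TrafficEmissions"]
exposure = OPP["PopulationExposure"]

# Subject - predicate - object: "traffic emissions are causally linked to population exposure"
g.add((emissions, OPP["isCauseOf"], exposure))
# A reference backing up the claim, attached as a further triple
g.add((emissions, OPP["reference"], Literal("Assessment 16, Table S1-1")))

print(g.serialize(format="turtle"))
</syntaxhighlight>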
 
Open policy ontology (see Appendix S3) describes all information objects and terms described above, making sure that there is a relevant item type or relation to every critical piece of information that is described in an insight network, open assessment, or shared understanding. "Critical piece of information" means something that is worth describing as a separate node, so that it can be more easily found, understood, and used. A node itself may contain large amounts of information and data, but for the purpose of producing shared understanding about a particular decision, there is no need to highlight the node's internal data on an insight network.
 
The ontology was used with indicator production in the climate strategy of Helsinki<ref name="hnh2035">City of Helsinki. The Carbon-neutral Helsinki 2035 Action Plan. Publications of the Central Administration  of the City of Helsinki 2018:4. http://carbonneutralcities.org/wp-content/uploads/2019/06/Carbon_neutral_Helsinki_Action_Plan_1503019_EN.pdf Assessment: https://ilmastovahti.hel.fi. Accessed 1 Feb 2020.</ref> and a visualisation project of insight networks<ref>Tuomisto JT. Näkemysverkot ympäristöpäätöksenteon tukena [Insight networks supporting the environmental policy making](in Finnish) Kokeilunpaikka. Website. https://www.kokeilunpaikka.fi/fi/kokeilu/nakemysverkot-ymparistopaatoksenteon-tukena. Accessed 1 Feb 2020.</ref>.
 
For a full description of the current vocabulary in the ontology, see Appendix S3 and Figures S2-3 and S2-4 in Appendix S2.
 
==== Novel concepts ====
 
This section presents novel concepts that have been identified as useful for a particular need and conceptually coherent with open policy practice. However, they have not been thoroughly tested in practical assessments of policy support.
 
''Value profile'' is a documented list of values, preferences, and choices of a participant. Voting advice applications are online tools that ask electoral candidates about their values, world views, or decisions they would make if elected. The voters can then answer the same questions and analyse which candidates share their values. Nowadays, such applications are routinely developed by all major media houses for every national election in Finland. Thus, voting advice applications produce a kind of value profile. However, these tools are not used to collect value profiles from the public for actual decision making or between elections, although such information could be used in decision support. Value profiles are mydata, i.e. data of an individual where they themselves can decide who is able to see and use it. This requires trusted and secure information systems.
 
''Archetype'' is an internally coherent value profile of an anonymised group of people. Coherence means that when two values are in conflict, the value profile describes which one to prefer. Archetypes are published as open data describing the number of supporters but not their identities. People may support an archetype in full or by declaring partial support to some specific values. Archetypes aim to save effort in gathering value data from the public, as when archetypes are used, not everyone needs to answer all possible questions. It also increases security since there is no need to handle individual people's potentially sensitive value profiles, when open aggregated data about archetypes suffices.
 
Political strategy papers typically contain explicit values of that organisation, aggregated in some way from their members' individual values. The strategic values are then used in the organisation in a normative way, implying that the members should support these values in their membership roles. An archetype differs from this, because it is descriptive rather than normative and a "membership" in an archetype does not imply any rights or responsibilities. Yet, political parties could also use archetypes to describe the values of their members.
 
The use of archetypes is based on an assumption that although their potential number is very large, most of a population's values relevant for a particular policy can be covered with a manageable number of archetypes. As a comparison, there are usually from two to a dozen significant political parties in a democratic country rather than hundreds. There is also research on human values showing that they can be systematically evaluated using a fairly small number of dimensions (e.g. 4, 10, or 19)<ref>Schwartz SH, Cieciuch J, Vecchione M, Davidov E, Fischer R, Beierlein C, Ramos A, Verkasalo M, Lönnqvist J-E. Refining the theory of basic individual values. Journal of Personality and Social Psychology. 2012: 103; 663–688. doi: 10.1037/a0029393.</ref>.
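
A minimal sketch of how value profiles and archetypes could be represented and matched, in the spirit of a voting advice application, is given below; the statements, answer scale, archetypes, and similarity measure are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Value statements answered on a scale from -2 (strongly disagree) to +2 (strongly agree)
statements = ["Prioritise emission cuts over costs",
              "Prefer market-based instruments",
              "Weight health impacts heavily"]

# Archetypes: open, aggregated value profiles of anonymised groups (illustrative numbers)
archetypes = {
    "Climate-first": np.array([2, 0, 1]),
    "Market-liberal": np.array([0, 2, 0]),
    "Health-focused": np.array([1, -1, 2]),
}

def closest_archetype(profile: np.ndarray) -> str:
    """Return the archetype whose answers are most similar to the given value profile."""
    return min(archetypes, key=lambda name: np.linalg.norm(archetypes[name] - profile))

# A participant answers the same statements and is matched to an archetype;
# they may then declare full or partial support to it.
participant = np.array([2, -1, 2])
print(closest_archetype(participant))   # -> "Health-focused"
</syntaxhighlight>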
 
''Paradigms'' are collections of rules to describe inferences that participants would make from data in the system. For example, the scientific paradigm has rules about criticism and a requirement that statements must be backed up by data or references. Participants are free to develop paradigms with any rules of their choosing, as long as they can be documented and operationalised within the system. For example, a paradigm may state that when in conflict, priority is given to the opinion presented by a particular authority. Hybrid paradigms are also allowed. For example, a political party may follow the scientific paradigm in most cases, but when economic assessments are ambiguous, the party chooses an interpretation that emphasises the importance of an economically active state (or alternatively a market approach with a passive state).
 
''Destructive policy'' is a policy that a) is actually being implemented or planned, making it politically relevant, b) causes significant harm to most or all stakeholder groups, as measured using their own interests and objectives, and c) has a feasible, less harmful alternative. Societal benefits are likely to be greater if a destructive policy is identified and abandoned, compared with a situation where an assessment only focuses on showing that one good policy option is slightly better than another one.
 
There are a few mechanisms that may explain why destructive policies exist. First, a powerful group can dominate the policymaking to their own benefit, causing harm to others. Second, the "prisoner's dilemma" or "tragedy of the commons" makes a globally optimal solution suboptimal for each individual stakeholder group, thus draining support from it. Third, the issue is so complex that the stability of the whole system is threatened by changes<ref>Bostrom N. (2019) The Vulnerable World Hypothesis. Global Policy 10: 4: 455-476. https://doi.org/10.1111/1758-5899.12718.</ref>. Advice about destructive policies may produce support for paths out of these frozen situations.
 
An analysis of destructive policies attempts to systematically analyse policy options and identify, describe, and motivate rejection of those that appear destructive. The tentative questions for such an analysis include the following.
* Are there relevant policy options or practices that are not being assessed?
* Do the policy options have externalities that are not being assessed?
* Are there relevant priorities among stakeholders that are not being assessed?
* Is there strong opposition against some options among the experts or stakeholders? What is the reasoning for and science behind the opposition?
* Is there scientific evidence that an option is unable to reach the objectives or is significantly worse than another option?
 
The current political actions to mitigate the climate crisis are so far from the global sustainability goals that there must be some destructive policies in place. Identification of destructive policies often requires that an assessor thinks outside the box and is not restricted to default research questions. In this example, such questions could be: "What is such a policy B that fulfils the objectives of the current policy A but with less climate emissions?", and "Can we reject the null hypothesis that A is better than B in the light of data and all major archetypes?" This approach has a premise that rejection is more effective than confirmation; an idea that was already presented by Karl Popper<ref name="popper1963"/>.
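
As a sketch under illustrative assumptions, the rejection question could be operationalised by simulating the outcomes of both policies under uncertainty, weighting them with the objectives of each archetype, and checking whether any major archetype would still prefer policy A:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Uncertain outcomes of two policies on two objectives (illustrative units and distributions)
outcomes = {
    "A": {"emissions_cut": rng.normal(10, 3, N), "health_gain": rng.normal(2, 1, N)},
    "B": {"emissions_cut": rng.normal(15, 3, N), "health_gain": rng.normal(3, 1, N)},
}

# Each archetype weights the objectives differently (illustrative weights)
archetype_weights = {
    "Climate-first": {"emissions_cut": 0.8, "health_gain": 0.2},
    "Health-focused": {"emissions_cut": 0.3, "health_gain": 0.7},
}

for name, w in archetype_weights.items():
    utility_A = sum(w[k] * outcomes["A"][k] for k in w)
    utility_B = sum(w[k] * outcomes["B"][k] for k in w)
    p_A_better = np.mean(utility_A > utility_B)
    print(f"{name}: P(A better than B) = {p_A_better:.2f}")

# If this probability is low for all major archetypes, the null hypothesis
# "A is at least as good as B" can be rejected and A flagged as a candidate destructive policy.
</syntaxhighlight>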
 
Parts of open policy practice have been used in several assessments. In this article, we will evaluate how these methods have performed.


== Methods ==
The methods of open policy practice were critically evaluated. The open assessments performed (Appendix S1) were used as the material for evaluation. The properties of good policy support (Table 3) were used as evaluation criteria in a similar way as in a previous evaluation<ref name="sandstrom2014"/>. In addition, open policy practice as a whole was evaluated using the categories of interaction (Table 6) and the test of shared understanding (Table 2) as criteria<ref name="pohjola2014"/>. Key questions in the evaluations were the following: Does open policy practice have the properties of good policy support? And does it enable policy support according to the five principles of open policy practice in Table 1? For each method within open policy practice, these questions were asked: In what way could the method materialise improvements in the property considered? Is there evidence or experience showing that improvement has actually happened in practice? Has the method shown disadvantages or side effects when implemented?


== Results ==


Different methods of open policy practice were evaluated for their potential or observed advantages and disadvantages according to the properties of good policy support. Major advantages are listed in Table 8. Some advantages, as well as disadvantages and problems, are discussed in more detail in the text. The text is organised along the properties of good policy support, categories of interaction, and the test of shared understanding.


{| {{prettytable}}
|+'''Table 8. Methods evaluated based on properties of good policy support. Background colours: white: no anticipated benefit, yellow: potential benefit, green: actual benefit observed in open policy practice materials. Numbers in parentheses refer to the assessments in Appendix S1, Table S1-1. The last row contains general suggestions to improve policy support with respect to each property.
|----
! Method
! Quality of content
! Relevance
! Availability
! Usability
! Acceptability
! Efficiency
|----
| Co-creation
| style="background: #74AF592F"|Participants bring new info (2, 3, 25, 26)
| style="background: #74AF592F"|New questions are identified during collaborative work (6, 11)
| style="background: #74AF592F"|Draft results raise awareness during work (2, 8, 27)
| style="background: #FBB8482F"|Readers ask clarifying questions and learn and create understanding through collaboration
| style="background: #74AF592F"|Participants are committed to conclusions (2, 8, 27)
| style="background: #FBB8482F"|Collaboration integrates communication to decision makers and stakeholders (users) into the making, which saves time and effort
|----
| Open assessment
| style="background: #74AF592F"| It combines functionalities of other methods and enables peer-reviewed assessment models (4, 5, 16)
| style="background: #74AF592F"| End-user discussions improve assessment (16, 26, 27)
| style="background: #FBB8482F"| It is available as draft since the beginning
| style="background: #74AF592F"| Standard structure facilitates use (8-9)
| style="background: #74AF592F"| Openness was praised (3, 8, 9, 21)
| style="background: #74AF592F"| Scope can be widened incrementally (12-16)
|----
| Insight network
| style="background: #74AF592F"| It brings structure to assessment and helps causal reasoning (8, 9, 11, 16, 17)
| style="background: #74AF592F"| It helps and clarifies discussions between decision makers and experts (8, 9)
|
| style="background: #FBB8482F"| Readers see what is excluded
| style="background: #FBB8482F"| It helps to check whether important issues are missing
|
|----
| Knowledge crystal
| style="background: #74AF592F"| They streamline work and provide tools for quantitative assessments (e.g. 3, 23, 24)
| style="background: #74AF592F"| They clarify questions (1, 6)
| style="background: #FBB8482F"| It is mostly easy to see where information should be found
| style="background: #FBB8482F"| Summaries help to understand
| style="background: #FBB8482F"| They make the intentionality visible by describing the assessment question
| style="background: #74AF592F"| Answers can be reused across assessments (12–16, 23-24)
|----
| Web-workspace
| style="background: #74AF592F"|Its structure supports high-quality content production when moderated (8, 9)
| style="background: #74AF592F"|It combines user needs and open policy practice (8, 9)
| style="background: #74AF592F"|It offers an easy approach to and archive of materials (16, 21, 23, 26)
| style="background: #74AF592F"|The user needs guided the functions developed (8)
|
| style="background: #FBB8482F"|It offers a place to document shared understanding and distribute information broadly.
|----
| Structured discussion
| style="background: #74AF592F"| It helps to moderate discussion and discourages low-quality contributions (2, 30)
| style="background: #74AF592F"| It guides focus on important topics (16, 30)
|
| style="background: #FBB8482F"| Threads help to focus reading
| style="background: #74AF592F"| User feedback has been positive: it helps to focus on key issues (8, 30)
| style="background: #FBB8482F"| Structure discourages redundancy
|----
| Open policy ontology
|
| style="background: #74AF592F"|It gives structure to insight networks and structured discussions (8, 16, 30)
|
| style="background: #FBB8482F"|Ontology clarifies issues and relations
|
|
|----
| Value profile and archetype
|
| style="background: #74AF592F"| Value profiles help to prioritise (8)
|
| style="background: #FBB8482F"| Voting advice applications may offer an example
| style="background: #FBB8482F"| Stakeholders' values are better heard
| style="background: #FBB8482F"| Archetypes are effective summaries
|----
| Paradigm
| style="background: #FBB8482F"| It motivates clear reasoning
| style="background: #FBB8482F"| It systematically describes conflicting reasonings
|
|
| style="background: #FBB8482F"| Stakeholders' reasonings are better heard
| style="background: #FBB8482F"| It helps to analyse inferences of different groups
|----
| Analysis of destructive policies
|
| style="background: #74AF592F"| It widens the scope (3, 8)
|
| style="background: #FBB8482F"| It emphasises mistakes to be avoided
| style="background: #FBB8482F"| Focus is on everyone's problems
| style="background: #FBB8482F"| Lessons learned can be reused in other decisions
|----
| '''Suggestions by open policy practice
| Work openly, invite criticism. Use tools and moderation to encourage high-quality contributions (Table 1.)
| Acknowledge the need for and potential of co-creation, discussion, and revised scoping. Invite all to policy support work. Characterize the setting (Table 4.)
| Design processes and information to be open from the beginning. Use open web-workspaces. (Table 5.)
| Invite participation from the problem owner and user groups early on. Use user feedback to visualise, clarify, and target content (Table 6.)
| Be open. Clarify reasoning. Acknowledge disagreements. Use the test of shared understanding (Table 2.)
| Combine information production, synthesis, and use to a co-creation process to save time and resources. Use shared information objects with open license, e.g. knowledge crystals.
|}


=== Quality of content ===
 
Open policy practice aims at high-quality information for decision makers. One of the ideas is that openness and co-creation enable external experts to see and criticise the content at all times so that corrections can be made. Participation among decision makers, stakeholders, and experts outside an assessment team is typically less common than would be ideal and requires special effort. Participation has been remarkably higher in projects where special emphasis and effort have been put into dissemination and facilitation, such as the Climate Watch and the Transport and communications strategy (assessments 8 and 26 in Table S1-1). Resources should be allocated to facilitation already when planning a policy process to ensure useful co-creation.
 
Participation is a challenge also in Wikipedia, where only a few percent of readers ever contribute, and the fraction of active contributors is even smaller<ref name="wikipedians">Wikipedia: Wikipedians. https://en.wikipedia.org/wiki/Wikipedia:Wikipedians. Accessed 1 Feb 2020</ref>. Indeed, the quality of content in Wikipedia is better in topics that are popular and have a lot of contributors.
 
Active participation did not solve quality control on behalf of the assessors, and it had to be taken care of by usual means. In any case, open policy practice does not restrict the use of common quality control methods and therefore it has at least the same potential to produce high-quality assessments as those using the common methods. The quality of open assessments has been acceptable for publishing in peer-reviewed scientific journals.
 
=== Relevance ===
 
What is relevant for a decision process can be a highly disputed topic. The shared interaction implies that stakeholders can and should participate in discussions about relevance and revision of scoping when necessary. In other words, everyone is invited to policy support work. The setting of an assessment (Table 4) helps participants to see what the assessment is about.
 
The analysis of destructive policies can be used as a method to focus on critical aspects of an assessment and thus increase relevance. For example, Climate Watch has an impact assessment tool<ref>Climate Watch. Impact and scenario tool. https://skenaario.hnh.fi/. Website. Accessed 1 Feb 2020.</ref> that dynamically simulates the total greenhouse gas emissions of Helsinki based on scenarios provided by the user. The tool is able to demonstrate destructive policies: for example, if the emission factor of district heating production does not significantly decrease in ten years, it will be impossible to reach the emission targets of Helsinki. Thus, there are sets of solutions that could be chosen because of their appealing details but that would not reduce the emission factor. The tool explicitly demonstrates that these solutions fail to reach the objectives. It also demonstrates that the emission factor is a critical variable that must be evaluated and managed carefully to avoid destructive outcomes.
 
Other examples include the Helsinki energy decision assessment (assessment 3 in Table S1-1). It showed that residential wood combustion was a devastating way to heat houses in urban areas and that health risks were much larger than with any other heating method. Yet, this is a popular practice in Finland, and there is clearly a need for dissemination about this destructive practice. Also, a health benefit–risk assessment showed that whatever policy is chosen regarding dioxins and young women, it should not reduce Baltic fish consumption in other population subgroups (assessment 16 in Table S1-1). This is because the dioxin health risk, while small, is concentrated in the population subgroup of young women, while all other subgroups would clearly benefit from increased fish intake.
 
=== Availability ===
 
The tools and web-workspaces presented in this article facilitated availability of information. In addition, many policy processes were designed in such a way that information was open from the beginning. Increased openness in the society has increased demands to make information available in situations where experts used to keep details to themselves. For example, source codes of assessment models have increasingly been made openly available, and Opasnet made that possible for these assessments.
 
Timing of availability is critical in a policy process, and assessment results are preferably available early on. This is a major challenge, because political processes may proceed rapidly and change focus, and quantitative assessments take time. A positive example of agility was a dioxin assessment model that had been developed in several projects during a few years (assessment 16 in Table S1-1)<ref name="goherr2020"/>. When the European Food Safety Authority released their new estimates about dioxin impacts on sperm concentration<ref>EFSA. Risk for animal and human health related to the presence of dioxins and dioxin‐like PCBs in feed and food. EFSA Journal 2018;16:5333. https://doi.org/10.2903/j.efsa.2018.5333</ref>, the assessment model was updated and new sperm concentration results were produced within days. This was possible because the existing dioxin model was modular and used knowledge crystals, so only the part about sperm effects had to be updated before rerunning the whole model.
 
Availability of previous versions may be critical. Many experts were reluctant to make their texts available in draft assessments if other people were able to edit them, but this fear was often alleviated by the fact that previous versions were always available in Opasnet version control if needed. Availability was also improved as information was produced in a proper format for archiving, backups were produced automatically, and it was easy to produce a snapshot of a final assessment. It was not necessary to copy information from one repository to another, but in a few cases, the final assessments were stored in external open data repositories.
 
In structured discussion, hierarchical threads increased availability, because a reader did not need to read further if they agreed with the topmost arguments (assessment 30 in Table S1-1). On the other hand, any thread could be individually scrutinised to the last detail if needed.
 
=== Usability ===
 
Co-creation activities demonstrated the utility of participation and feedback (assessments 6, 8, Table S1-1). Even with good substance knowledge, an assessor cannot know the aspects and concerns a decision maker may have. Usability of information was clearly improved when problem owners and user groups were invited to participate early on. User feedback proved to be very useful to visualise, clarify, and target content.
 
The climate strategy of Helsinki (assessment 8, Table S1-1) took the usability challenge seriously and developed the Climate Watch website from scratch based on open source code modules and intensive user testing and service design. Insight networks and knowledge crystals were basic building blocks of the system architecture. It received almost exclusively positive feedback from both users and experts. Also, a lot of emphasis was put on building a user community, and city authorities, other municipalities, and research institutes have shown interest in collaboration. In contrast, Opasnet was designed as a generic tool for all kinds of assessments but without an existing end-user demand. As a result, the penetration of Climate Watch has been much quicker.
 
An insight network provides a method to illustrate and analyse a complex decision situation, while knowledge crystals offer help in describing quantitative nuances within the nodes or arrows, such as functional or probabilistic relations or estimates. There are tools with both graphical and modelling functionalities, e.g. Hugin (Hugin Expert A/S, Aalborg, Denmark) for Bayesian belief networks and Analytica® (Lumina Decision Systems Inc, Los Gatos, CA, USA) for Monte Carlo simulation. However, these tools are designed for a single desktop user rather than for open co-creation. In addition, they have limited possibilities for adding non-causal nodes and links or free-format discussions about the topics.
 
Insight networks were often complex and therefore better suited for detailed expert or policy work rather than for general dissemination. Other dissemination methods were needed as well. This was true also for knowledge crystals, although page summaries helped dissemination.
 
A knowledge crystal is typically structured so that it starts with a summary, then describes a research question and gives a more detailed answer, and finally provides a user with relevant and increasingly detailed information in a rationale. This increased the usability of a page among different user groups. On the other hand, some people found this structure confusing as they did not expect to see all the details of an assessment. Users were unsure about the status of a knowledge crystal page and whether some information was up to date or still missing. This was because many pages were work in progress rather than finalised products. This was clarified by adding status declarations at the top of pages. Declaring drafts as drafts also helped experts who were uncomfortable showing their own work before it was fully complete.
 
Voting advice applications share properties with value profiles and archetypes, and offer material for concept development. The popularity of these applications implies that there is a societal need for value analysis and aggregation. The data has been used to understand differences between individuals and political groups in Finland. With more nuanced data, a set of archetypes can probably be developed to describe common and important values in the population. Some of them may have potential to increase in popularity and form a kind of virtual party representing the population's key values.
 
Value profiles and paradigms were tested in structured discussions and shared understanding descriptions (assessment 30, Table S1-1). Helsinki also tested value profiles in prioritising the development of Climate Watch. They were found to be promising and conceptually sound ideas in this context. Data that resemble value profiles are being collected by social media companies, but the data are used to inform marketing actions, often without the individual's awareness, so they are not mydata. In contrast, the purpose of value profile data is to inform societal decisions with consent from its owner rather than nudge the voter to act according to a social media company's wishes. The recent microtargeting activities by Cambridge Analytica and AggregateIQ to use value-profile-like data proved to be very effective in influencing voting decisions<ref name="ukparliament2019">UK Parliament. (2019) Disinformation and 'fake news': Final report. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm. Accessed 1 Feb 2020</ref>. Value profiles are clearly severely underutilised as a tool to inform decisions. We are not aware of systems that would collect value profile data for actual democratic policy support between elections.
 
=== Acceptability ===
 
A major factor increasing acceptability was whether the stakeholders thought that they had been given all relevant information and whether their concerns had been heard. This emphasised the need to be open and clarify reasonings of different stakeholders. It was also found important to acknowledge disagreements. The test of shared understanding (Table 2.) appeared to be a useful tool in documenting these aspects.
 
Experts were often reluctant to participate in open assessments because they had concerns about the acceptability of the process. They thought that expertise is not given proper weight if open participation is allowed. They feared that strong lobbying groups hijack the process. They feared that self-organised groups produce low-quality information or even malevolent disinformation. They often demanded the final say as the ultimate quality criterion, rather than trusting that data, reasoning, and critical discussion would do a better job. In brief, experts commonly thought that it is simply easier and more efficient to produce high-quality information in closed expert groups.
 
In a vaccine-related assessment (2, Table S1-1), comments and critique were received from both the drug industry and vaccine citizen organisations by using active facilitation, and they were all very matter-of-fact. This was interesting, as the same topics caused outrage in social media, but this was not seen in structured assessments. This was possibly because the questions asked were specific and typically required some background knowledge of the topic. Interestingly, one of the most common objections and fears against open assessment was that citizen contributions are ill-informed and malevolent. The experience with open assessments showed that they were not.
 
=== Efficiency ===
 
Open policy practice combines information production, synthesis, and use into a single co-creation endeavour covering a whole policy process. When successful, this approach saved time and resources because of parallel work and rapid feedback and guidance. However, not all open assessments were optimally designed to maximise co-creation between decision makers and experts. Rather, efficiency was typically achieved when knowledge crystals improved structure and reuse and thus saved resources in assessment modelling.
 
A common solution to co-operation needs seemed to be a strict division of tasks. Detailed understanding of and contributions to other groups' work and models remained low or non-existent. This was typical in large assessment projects (assessments 4, 5, 7, Table S1-1). On the other hand, most researchers were happy in their own niche and did not expect that other experts could or should learn the details of their work. Consequently, the perceived need for shared tools or open data was often low, which hindered mutual sharing, learning, and reuse.
 
The implementation phase of Climate Watch, which started in December 2018, also involved citizens, decision-makers, and other municipalities. It was the largest case study so far using open policy practice. It combined existing climate emission models for municipalities and produced new ones. A long-term objective was to collect detailed input data ideally about the whole country and offer all models to all municipalities, thus maximising reuse.
 
An important skill in open policy practice was to learn to identify important pieces of relevant information (such as scientific facts, publications, discussions etc.) and to add that information into a proper place in an insight network by using the open policy ontology and a reasonable amount of work. The more user need there was for a piece of information, the more time it was worth spending on producing it. An ontology helped to do this in practice so that the output was understandable for both humans and computers.
 
Accumulation of scientific merit was a key motivator for researchers. Policy support work typically did not result in scientific articles. When researchers evaluated the efficiency of their own work, they preferred tasks that produced articles in addition to societal benefit. The same reasoning was seen with open assessments and knowledge crystals, resulting in reluctance to participate. Win-win situations could be found if policy processes were actively developed to contain research aspects, so that new information would be produced for decision makers but also for scientific audiences.
 
=== Categories of interaction ===
 
Assessment methods have changed remarkably in forty years. During the last decades, the trend has been from isolated to more open approaches, but all categories of interaction (Table 6) are still in use<ref name="pohjola2012"/>. The trend among the open assessments (Appendix S1) also seemed to be towards more participatory processes. Enabling participation was not enough, as interaction required facilitation and active invitation of decision-makers, experts, and stakeholders. Although openness and participation were available in all the open assessments in theory, only a minority of them actually had enough resources for facilitation to realise good co-creation in practice. In the first open assessments in the early 2000s, people were not familiar even with the concepts of co-creation. In recent examples, especially in the Helsinki climate strategy (assessment 8, Table S1-1), co-creation and openness were insisted upon by decision-makers, civil servants, and experts alike. There was also political will to give resources for co-creation and facilitation. This resulted in actual shared interaction between all groups.
 
The example in Helsinki produced interest and enthusiasm in both climate activists and other municipalities. The activists started to self-organise evaluation and monitoring using Climate Watch and ask for explanations from civil servants whose actions were delayed. Several municipalities expressed their interest to start using Climate Watch in their own climate work, thus indicating that they had adopted the principles of openness and collaboration. This implies that although the popularity of co-creation increased slowly during previous years, good experiences and awareness increase the rate of change, thus resulting in supra-linear progress in interaction.
 
=== Test of shared understanding ===
 
Shared understanding clarified complex issues and elicited implicit valuations and reasonings in the open assessments. It facilitated rational discussion about a decision and explicated values of stakeholders e.g. about vaccines (assessments 1, 2 in Table S1-1). It also created political pressure against options that were not well substantiated, e.g. about health effects of food (assessment 31, Table S1-1). Shared understanding was approached even when a stakeholder was ignorant of or even hostile to new insights, or not interested in participating, such as in trip aggregation assessment or health benefit-risk assessment of Baltic fish (assessments 11 and 16, Table S1-1). Then, there was an attempt to describe stakeholders' views based on what other people know about their values. Everyone's views are seen as important policy-relevant information that may inform decision making.
 
Shared understanding was a well accepted idea among many decision makers in Finland. This was observed in collaboration with the Prime Minister's Office of Finland (assessment 27, Table S1-1). Many civil servants in ministries liked the idea that sometimes it is better to aim at understanding rather than consensus. They soon adopted the easy version of the term and started to use it in their own discussions and publications<ref>Dufva M, Halonen M, Kari M, Koivisto T, Koivisto R, Myllyoja J. Kohti jaettua ymmärrystä työn tulevaisuudesta [Toward a shared understanding of the future of work]. Helsinki: Prime Minister's Office: Publications of the Government's analysis, assessment and research activities 33; 2017. (in Finnish) http://tietokayttoon.fi/julkaisu?pubid=18301. Accessed 1 Feb 2020.</ref><ref>Oksanen K. Valtioneuvoston tulevaisuusselonteon 1. osa. Jaettu ymmärrys työn murroksesta [Government Report on the Future Part 1. A shared understanding of the transformation of work] Prime Minister's Office Publications 13a; 2017. (in Finnish) http://urn.fi/URN:ISBN:978-952-287-432-0. Accessed 1 Feb 2020.</ref>.
 
However, shared understanding was not unanimously accepted. Experts were often reluctant to start scientific discussions with citizens, especially if there were common or strong false beliefs about the topic among the public. In such cases, a typical argument was that the role of an expert is to inform and, if possible, suppress false beliefs rather than engage in producing common descriptions about differing views. The target seemed to be to convince the opponent rather than increase understanding among the audience.
 
The test of shared understanding was a useful tool to recognise when not all values, causal chains or decision makers' rationale were known and documented. Yet, lack of time or resources often prevented further facilitation, information collection, or expansion of the scope of an assessment.
 
== Discussion ==
 
This article presents methods and tools designed for decision support. Many of them have already been successfully used, while others have been identified as important parts of open policy practice but have not been extensively tested.
 
The discussion is organised around the five principles of open policy practice: collaboration, openness, causality, criticism, and intentionality. The principles are looked at in the light of popularity, acceptance, and lessons learned from practical experience.
 
The five principles are not unique to open policy practice; on the contrary, they have been borrowed from various disciplines (for reviews, see <ref name="pohjola2012"/><ref name="pohjola2013"/>). The aim was to use solid principles to build a coherent set of methods that gives practical guidance for decision support. It is reassuring that many principles from the original collection<ref name="ora2007"/> have increased in popularity in society. There are also studies comparing parts of open policy practice to other existing methods<ref>Pohjola MV, Pohjola P, Paavola S, Bauters M, Tuomisto JT. (2011) Pragmatic knowledge services. Journal of Universal Computer Science 17, 472-497. https://doi.org/10.3217/jucs-017-03-0472.</ref>.
 
The results showed that the methods connected the five principles quite well to the properties of good policy support (Table 8). Open collaboration indeed resulted in high-quality content when knowledge crystals, web-workspaces and co-creation were utilised. End-user interaction and structured discussions helped to revise scoping and content, thus improving relevance and usability. Acknowledging disagreements and producing shared understanding created acceptability. And openly shared information objects such as data and models improved availability and efficiency.  


The experiences about open policy practice demonstrate that it works as expected when the participants are committed to '''collaborate''' using the methods, practices, and tools. However, there have been fewer participants in most open assessments than what had been hoped for. This can partly be influenced by the assessors' own actions, as reader and contributor numbers clearly went up with active facilitation or marketing with large media coverage and public interest. Some other reasons cannot be easily affected directly, such as inertia to change established practices or lack of scientific merit. Thus, a major long-term challenge is to build an attractive assessor community, culture, and incentives for decision support.


The GovLab in New York is an example of such activity (www.thegovlab.org). They have expert networks, training, projects, and data sources available to improve policy support. There is a need for similar tools and training designed to facilitate a change elsewhere. New practices could also be promoted by developing ways to give scientific — or political — merit and recognition more directly based on online co-creation contributions. The current publication counts and impact factors — or public votes — are very indirect measures of scientific or societal importance of the information or policies produced.


Knowledge crystals offer a collaboration forum for updating scientific understanding about a topic in a quicker and easier way than publishing scientific articles. Knowledge crystals are designed to be updated based on continuous discussion about the scientific issues (or valuations, depending on the topic) aiming to back up conclusions. In contrast, scientific articles are expected to stay permanently unchanged after publication. Articles offer little room for deliberation about the interpretation or meaning of the results after a manuscript is submitted: reviewer comments are often not published, and further discussion about an article is rare and mainly occurs only if serious problems are found. Indeed, the current scientific publishing system is poor in correcting errors via deliberation<ref>Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: A tragedy of errors. Nature 2016;530:27–29. doi:10.1038/530027a</ref>.  


Shared understanding is difficult to achieve if the decision maker, media environment, or some political groups are indifferent about or even hostile against scientific knowledge or public values. For many interest groups, non-public lobbying, demonstrations and even spreading faulty information are attractive ways of influencing the outcome of a decision. These are problematic methods from the perspective of open policy practice, because they reduce the availability of important information in decision processes.


Further studies are needed on how open, information-based processes could be developed to be more tempting to groups that previously have preferred other methods. A key question is whether shared understanding is able to offer acceptable solutions to disagreeing parties and alleviate political conflict. Another question is whether currently under-represented groups have better visibility in such open processes. Also, more information is needed about how hostile contributions get handled, when they occur; fortunately, they were very rare in the open assessments.


There is no data about open policy practice usage in a hostile environment. Yet, open policy practice can be collaboratively used even without support from a decision maker or an important stakeholder. Although their objectives and values are important for an assessment, these may be either deduced indirectly from their actions, or even directly replaced by the objectives of the society at large. Thus, open policy practice is arguably a robust set of methods that can be used to bypass non-democratic power structures and focus on the needs of the public even in a non-optimal collaboration environment.


There is still a lot to learn about using co-created information in decision making. Experiences so far have demonstrated that decision making can be more evidence-informed than what it typically is, and several tools promoting this change are available.


'''Openness''' in science is a guiding principle and current megatrend, and its importance has been accepted much more widely during recent years. Yet, the practices in research are changing slowly, and many current practices are actually in conflict with openness. For example, it is common to hide expert work until it has been finalised and published, to publish in journals where content is not freely available, and to not open the data used.  


A demand to produce assessments openly and describe all reasoning and data already from the beginning was often seen as an unreasonable requirement and made experts reluctant to participate. This observation raised two opposite conclusions: either that openness should be incentivised and promoted actively in all research and expert work<ref name="tsv2020">Federation of Finnish Learned Societies. (2020) Declaration for Open Science and Research (Finland) 2020-2025. https://avointiede.fi/fi/julistus. Accessed 1 Feb 2020</ref>, including decision support; or that openness as an objective hinders expert work and should be rejected. The latter conclusion was strong among experts in the early open assessments, but the former one has gained popularity.


There are several initiatives to open scientific processes, such as the Open Science Framework (www.osf.io). These are likely to promote change in science at large and indirectly also in scientific support of decision making.


Among experts, '''causality''' was seen as a backbone of impact modelling. In political arenas, causal discourse was not as prominent, as it was often noticed that there was actually little solid information about the most policy-relevant causal chains, and therefore values dominated policy discussions. Climate Watch was the most ambitious endeavour in the study material to quantify all major causal connections of a climate action plan. The approach was supported by the city administration and stakeholders alike. Causal quantification created an additional resource need that was not originally budgeted. It is not yet known how Helsinki, other cities, and research institutes will distribute the resources and tasks of causal modelling and the information produced. Yet, actions in the national energy and climate plans total 260 billion euro per year in the EU<ref>European Commission. (2019) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions united in delivering the energy union and climate action - setting the foundations for a successful clean energy transition. COM/2019/285 final https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52019DC0285. Accessed 1 Feb 2020.</ref>. So, even minor improvements in the efficiency or effectiveness of climate actions would make causal assessments worthwhile.


'''Criticism''' has a central role in the scientific method. It is applied in practical situations because rejecting poor statements is easier and more efficient than trying to prove statements true<ref name="popper1963"/>. Most critique in open assessments was verbal or written discussion between participants, focussing on particular, often detailed topics. Useful information structures have been found for criticism, notably structured discussions that can target any part of an assessment (scope, data, premises, analyses, structure, results, etc.).


By now it seems clear that information in a description of shared understanding is very complex and often very large. So, a new research question emerges: how can all this information be written down and organised in such a way that it can easily be used and searched by both a human and a computer? A descriptive book would be too long for busy decision makers and unintelligible for computers. An encyclopedia would miss relevant links between items. A computer model would be a black box for humans.
The current practices of open criticism in research are far from optimal, as criticism rarely happens. Pre-publication peer review is almost the only occasion when scientific work is criticised by people outside the research group, and those reviews are typically not open. A minute fraction of published works are criticised openly in journals; a poor work is simply not cited and subsequently forgotten. Interestingly, some administrative processes follow scientific principles better than many research processes do: for example, environmental impact assessment has a compulsory process for open criticism at both the design and result phases<ref name="yva">European Parliament. Directive 2014/52/EU of the European Parliament and of the Council of 16 April 2014 amending Directive 2011/92/EU on the assessment of the effects of certain public and private projects on the environment Text with EEA relevance. https://eur-lex.europa.eu/eli/dir/2014/52/oj Accessed 1 Feb 2020.</ref>.


'''Ontology'''
'''Intentionality''' requires that the objectives and values of stakeholders in general and decision makers in particular are understood. In the studied assessments, some values were always identified and documented, but it was not common to systematically describe all relevant values, or even to ensure that the assessed objectives were actually the most important ones for the decision maker. There is clearly a need to prioritise facilitation and interaction about values.


We suggest that in addition to all other information structures presented above, there is a need for an ''ontology'' that acknowledges these information structures, the special needs of decision situations, the informal nature of public discussions, and the computational needs of quantitative models. Such an ontology or vocabulary would go a long way towards integrating readability for humans and computers.
In shared understanding, some claims were found to be unsubstantiated or clearly false. On the societal level, open policy practice aimed to increase political pressure against decisions based on poor ideas by explicating the problems and informing the public about them. The purpose was not to pressure individuals to reject their unsubstantiated thoughts. Personal beliefs were understood rather than threatened, because the aim was to build acceptance and facilitate contributions. However, it is not known what happens with very sensitive personal topics, because there were no such issues in the studied assessments.


The World Wide Web Consortium has developed the concepts of linked open data and the resource description framework<ref>W3C. Resource Description Framework (RDF). https://www.w3.org/RDF/. Accessed 24 Jan 2018.</ref>. These have been used as the main starting points for ontology development. This work is far from final, but we present the current version here.
Politics in western democracies is typically based on the premise that ultimately the citizens decide about things by voting. Therefore, in a sense, people cannot vote "wrong". In contrast, open policy practice is based on the premise that the objectives of the citizens are the ultimate guiding principle, and it is a matter of discussion, assessment, and other information work to suggest which paths should or should not be taken to reach these objectives. This thinking is close to James Madison's ideas about democracy in Federalist 63 from 1788<ref>James Fishkin. (2011) When the people speak. Deliberative democracy and public consultation. Publisher: Oxford University Press. ISBN 978-0199604432</ref>. In this context, people vote wrong if they vote for an option that is incapable of delivering the outcomes that they want.


Ontologies are based on vocabularies with specified terms and meanings. The relations between terms are also explicit. The resource description framework is based on the idea of triples, which have three parts: subject, predicate, and object. These can be thought of as sentences: an item (subject) is related to (predicate) another item or value (object), thus forming a claim. Claims can further be specified using qualifiers and backed up by references. Such a block of information is called a statement. Extended causal diagrams can be described automatically as triples, and vice versa. Triple databases enable wide, decentralised linking of various sources and information. There is an open-source solution for linking such a database directly to a wiki platform; this software is called Wikibase.
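As an illustrative sketch only (not part of the Opasnet or Wikibase toolchain), the kind of triples shown graphically in Figure 1 could be written down with the Python rdflib library. The namespace, item names, and property names below are hypothetical placeholders.

<syntaxhighlight lang="python">
from rdflib import Graph, Namespace

# Hypothetical namespace for open policy practice items; not an official Opasnet URI scheme.
OPP = Namespace("http://example.org/opp/")

g = Graph()
g.bind("opp", OPP)

# Each add() call stores one triple: subject, predicate, object.
g.add((OPP.EU_Parliament, OPP.decides, OPP.herring_fishing_intensity))
g.add((OPP.Ministry_Council, OPP.decides, OPP.herring_fishing_intensity))
g.add((OPP.herring_fishing_intensity, OPP.instance_of, OPP.decision))

# Serialise the small triple network in Turtle format for exchange between tools.
print(g.serialize(format="turtle"))
</syntaxhighlight>

In a triple store such as Wikibase, the qualifiers and references mentioned above would additionally be attached to each statement rather than expressed as bare triples.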
If people are well-informed and have the time and capability to consider different alternatives, the two premises lead to similar outcomes. However, recent policy research has shown that this prerequisite is often not met, and people can be and increasingly are being misled, especially with modern microtargeting tools<ref name="ukparliament2019"/>. The need for protecting people and decision making from misleading information has been recognised.


The current version of the open policy practice ontology focusses on describing all information objects and terms described above, and on making sure that there is a relevant item or relation for every piece of information that is described in an extended causal diagram, open assessment, or shared understanding. However, the strategy is not to describe all possible information with such a structure, but only the critical part related to the decision situation.
Public institutions such as an independent justice system, a free press, and honest civil servants provide protection against misleading activities and disruptive policies. These democratic institutions have deteriorated globally, and particularly in some countries, even in places with a good track record<ref>Freedom House. (2019) Freedom in the World 2019 https://freedomhouse.org/report/freedom-world/freedom-world-2019. Accessed 1 Feb 2020.</ref>.


For example, Figure 1 shows that the EU Parliament and the Ministry Council have a role in fishing regulation, but this graphical representation of the triple does not attempt to tell anything else about these large and complex organisations, except that they have a role in the case and that they make a decision about herring fishing intensity. Also, Figure A1-1 in Appendix 1 lists two scientific articles. The content of these articles is not described, because already with little effort (connecting an article identifier and a topic) the availability and usability of the articles increase a lot. In contrast, the further step of structuring the article content into the causal diagram would take much more work and give less added value, unless someone identifies an important new statement in the article, making the effort worth it.
Focusing on destructive policies may be an effective way to inform stakeholders in a grim societal environment. Open policy practice may not be very effective in choosing the best alternative among good ones, but it is probably more effective in identifying and rejecting poor alternatives, i.e. destructive policies, which is often more important. This is expected to result in more stable and predictable policies. It is possible to focus on disseminating information about which actions especially should not be taken, why, and how this is known. In such discourse, the message can be practical, short, and clear, and the rationale is available for anyone interested. Practical experiments are needed to tell whether this could reduce the support of destructive policies among the public.


Thus, the aim is that any publication or other piece of new information can fairly easily and with little work find a proper place within this complex network of triples. The internal structure of the information within each piece is documented only as necessary.
Further research is also needed to study other aspects of destructive policies: Can such policies be unambiguously recognised? Is shared understanding about them convincing enough among decision makers to change policies? Does it cause objections about science being biased and partisan? Does open policy practice prevent destructive policies from gaining political support?


For a full description of the current vocabulary in the ontology, see Appendix 1.
== Conclusions ==


== Results and evaluation ==
In conclusion, open policy practice works technically as expected. Open assessments can be performed openly online. They do not fail for the reasons many people expect, namely low-quality contributions, malevolent attacks, or chaos caused by too many uninformed participants; these phenomena are very rare. Shared understanding has proved to be a useful concept that guides policy processes toward a more collaborative approach, whose purpose is wider understanding rather than winning.
 
However, open policy practice has not been adopted in expert work or decision support as expected. A key hindrance has been that the initial cost of learning and adopting new tools and practices has been higher than what an expert is willing to pay for participation in a single assessment, even if the impacts on the overall process are positive. The increased availability, acceptability, and inter-assessment efficiency have not yet been fully recognised by the scientific or policy community.
 
Active facilitation, community building, and improving the user-friendliness of the tools were identified as key solutions for improving the usability of the method in the future.
 
== List of abbreviations ==
 
* THL: Finnish Institute for Health and Welfare (government research institute in Finland)
* IEHIAS: Integrated Environmental Health Impact Assessment System (a website)
* RDF: resource description framework
 
== Declarations ==
 
*    Ethics approval and consent to participate: Not applicable
*    Consent for publication: Not applicable
*    Availability of data and materials: The datasets generated and/or analysed during the current study are available at the Opasnet repository, http://en.opasnet.org/w/Open_policy_practice
*    Competing interests: The authors declare that they have no competing interests.
*    Funding: This work resulted from the BONUS GOHERR project (Integrated governance of Baltic herring and salmon stocks involving stakeholders, 2015-2018) that was supported by BONUS (Art 185), funded jointly by the EU, the Academy of Finland and the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning. Previous funders of the work: Centre of Excellence for Environmental Health Risk Analysis 2002-2007 (Academy of Finland), Beneris 2006-2009 (EU FP6 Food-CT-2006-022936), Intarese 2005-2011 (EU FP6 Integrated project in Global Change and Ecosystems, project number 018385), Heimtsa 2007-2011 EU FP6 (Global Change and Ecosystems project number GOCE-CT-2006-036913-2), Plantlibra 2010-2014 (EU FP7-KBBE-2009-3 project 245199), Urgenche 2011-2014 (EU FP7 Call FP7-ENV-2010 Project ID 265114), Finmerac 2006-2008 (Finnish Funding Agency for Innovation TEKES), Minera 2010-2013 (European Regional Development Fund), Scud 2005-2010 (Academy of Finland, grant 108571), Bioher 2008-2011 (Academy of Finland, grant 124306), Claih 2009-2012 (Academy of Finland, grant 129341), Yhtäköyttä 2015-2016 (Prime Minister's Office, Finland), Ympäristöterveysindikaattori 2018 (Ministry of Social Affairs and Health, Finland). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
*    Authors' contributions: JT and MP jointly developed the open assessment method and open policy practice. JT launched Opasnet web-workspace and supervised its development. TR developed OpasnetUtils software package from an original idea by JT and implemented several assessment models. All authors participated in several assessments and discussions about methods. JT wrote the first manuscript draft based on materials from MP and TR. All authors read and approved the final manuscript.
*    Acknowledgements: We thank Einari Happonen and Juha Villman for their work on developing Opasnet, and Juha Yrjölä and Tero Tikkanen for developing Climate Watch; and Arja Asikainen, John S. Evans, Alexanda Gens, Patrycja Gradowska, Päivi Haapasaari, Sonja-Maria Ignatius, Suvi Ignatius, Matti Jantunen, Anne Knol, Sami Majaniemi, Päivi Meriläinen, Kaisa Mäkelä, Raimo Muurinen, Jussi Nissilä, Juha Pekkanen, Mia Pihlajamäki, Teemu Ropponen, Kalle Ruokolainen, Simo Sarkki, Marko Tainio, Peter Tattersall, Hanna Tuomisto, Jouko Tuomisto, Matleena Tuomisto, and Pieta Tuomisto for crucial and inspiring discussions about methods and their implementation, and promoting these ideas on several forums.
 
== Endnotes ==
 
'''<sup>a</sup>''' This paper has its foundations in environmental health, but the idea of decision support necessarily looks at aspects seen as relevant from the point of view of the decision maker, not from that of an expert in a particular field. Therefore, this article and the method described deliberately take a wide view and cover all areas of expertise. However, all practical case studies have their main expertise needs in public health, and often specifically in environmental health. '''<sup>b</sup>''' Whenever this article presents a term in italics (e.g. ''open assessment''), it indicates that there is a page at the Opasnet web-workspace describing that term and that it can be accessed using a respective link (e.g. http://en.opasnet.org/w/Open_assessment). '''<sup>c</sup>''' Insight network was originally called ''pyrkilo'' (and at some point also ''extended causal diagram''). The word and concept pyrkilo was coined in 1997. In Finnish, pyrkilö means "an object or process that tends to produce or aims at producing certain kinds of products." The reasoning for using the word was that pyrkilo diagrams and related structured information such as models tend to improve understanding and thus decisions. The first wiki website was also called Pyrkilo, but the name was soon changed to Opasnet.
 
== References and notes ==
 
<references/>
 
== Figures and tables ==
 
Move them here for submission.
 
== Appendix S1: Open assessments performed ==
 
A number of open assessments have been performed in several research projects (see the funding declaration) and health assessments since 2004. Some assessments have also been done on international ''Kuopio Risk Assessment Workshops'' for doctoral students in 2007, 2008, and 2009 and on a Master's course ''Decision Analysis and Risk Management'' (6 credit points), organised by the University of Eastern Finland (previously University of Kuopio) in 2011, 2013, 2015, and 2017.
 
More assessments can be found at Opasnet page ''Category:Assessments''.


{| {{prettytable}}
|+'''Table S1-1. Some environmental health assessments performed using open assessment. References give links to both an assessment page and a scientific publication as applicable.
|----
! Topic
! #
! Assessment
! Year
|----
| rowspan="2"|Vaccine effectiveness and safety
| 1
| Assessment of the health impacts of H1N1 vaccination<ref>Assessment: http://en.opasnet.org/w/Assessment_of_the_health_impacts_of_H1N1_vaccination. Accessed 1 Feb 2020.</ref>  
| 2011
| In-house, collaboration with Decision Analysis and Risk Management course
|----
| 2
| Tendering process for pneumococcal conjugate vaccine<ref>Assessment: http://en.opasnet.org/w/Tendering_process_for_pneumococcal_conjugate_vaccine. Accessed 1 Feb 2020.</ref>  
| 2014
| In-house, collaboration with the National Vaccination Expert Group
|----
| rowspan="7"|Energy production, air pollution and climate change
| 3
| Helsinki energy decision<ref>Tuomisto JT, Rintala J, Ordén P, Tuomisto HM, Rintala T. Helsingin energiapäätös 2015. Avoin arviointi terveys-, ilmasto- ja muista vaikutuksista.  [Helsinki energy decision 2015. An open assessment on health, climate, and other impacts]. Helsinki: National Institute for Health and Welfare. Discussionpaper 24; 2015. http://urn.fi/URN:ISBN:978-952-302-544-8 Assessment: http://en.opasnet.org/w/Helsinki_energy_decision_2015. Accessed 1 Feb 2020.</ref>
| 2015
| In-house, collaboration with city of Helsinki
|----
| 4
| Climate change policies and health in Kuopio<ref>Asikainen A, Pärjälä E, Jantunen M, Tuomisto JT, Sabel CE. Effects of Local Greenhouse Gas Abatement Strategies on Air Pollutant Emissions and on Health in Kuopio, Finland. Climate 2017;5(2):43; doi:10.3390/cli5020043 Assessment: http://en.opasnet.org/w/Climate_change_policies_and_health_in_Kuopio. Accessed 1 Feb 2020.</ref>  
| 2014
| Urgenche, collaboration with city of Kuopio
|----
| 5
| Climate change policies in Basel<ref>Tuomisto JT, Niittynen M, Pärjälä E, Asikainen A, Perez L, Trüeb S, Jantunen M, Künzli N, Sabel CE. Building-related health impacts in European and Chinese cities: a scalable assessment method. Environmental Health 2015;14:93. doi:10.1186/s12940-015-0082-z Assessment: http://en.opasnet.org/w/Climate_change_policies_in_Basel. Accessed 1 Feb 2020.</ref>  
| 2015
| Urgenche, collaboration with city of Basel
|----
| 6
| Availability of raw material for biodiesel production<ref name="sandstrom2014">Sandström V, Tuomisto JT, Majaniemi S, Rintala T, Pohjola MV. Evaluating effectiveness of open assessments on alternative biofuel sources. Sustainability: Science, Practice & Policy 2014;10;1. doi:10.1080/15487733.2014.11908132 Assessment: http://en.opasnet.org/w/Biofuel_assessments. Accessed 1 Feb 2020.</ref>
| 2012
| Jatropha, collaboration with Neste Oil
|---
| 7
| Health impacts of small scale wood burning<ref>Taimisto P, Tainio M, Karvosenoja N, Kupiainen K, Porvari P, Karppinen A, Kangas L, Kukkonen J, Tuomisto JT. Evaluation of intake fractions for different subpopulations due to primary fine particulate matter (PM2.5) emitted from domestic wood combustion and traffic in Finland. Air Quality Atmosphere and Health 2011;4:3-4:199-209. doi:10.1007/s11869-011-0138-3 Assessment: http://en.opasnet.org/w/BIOHER_assessment. Accessed 1 Feb 2020.</ref>  
| 2011
| Bioher, Claih
|----
| 8
| Climate strategy of Helsinki: Carbon neutral Helsinki 2035 action plan<ref name="hnh2035">City of Helsinki. The Carbon-neutral Helsinki 2035 Action Plan. Publications of the Central Administration  of the City of Helsinki 2018:4. http://carbonneutralcities.org/wp-content/uploads/2019/06/Carbon_neutral_Helsinki_Action_Plan_1503019_EN.pdf Assessment: https://ilmastovahti.hel.fi. Accessed 1 Feb 2020.</ref>
| 2018
| In-house, collaboration with city of Helsinki
|----
| 9
| Climate mitigation of the social affairs and health sector in Finland<ref>Tuomisto JT. (2020) Climate emissions of the social affairs and health sector in Finland and potential mitigation actions. Assessment: https://hnpolut.dokku.teamy.fi. Accessed 1 Feb 2020</ref>
| 2020
| In-house, commissioned by the Prime Minister
|----
| rowspan="2"|Health, climate, and economic effects of traffic
| 10
| Gasbus - health impacts of Helsinki bus traffic<ref>Tainio M, Tuomisto JT, Hanninen O, Aarnio P, Koistinen, KJ, Jantunen MJ, Pekkanen J. Health effects caused by primary fine particulate matter (PM2.5) emitted from buses in the Helsinki metropolitan area, Finland. RISK ANALYSIS 2005;25:1:151-160. Assessment: http://en.opasnet.org/w/Gasbus_-_health_impacts_of_Helsinki_bus_traffic. Accessed 1 Feb 2020.</ref>
| 2004
| Collaboration with Helsinki Metropolitan Area
|----
| 11
| Cost-benefit assessment on composite traffic in Helsinki<ref name="tuomisto2005">Tuomisto JT; Tainio M. An economic way of reducing health, environmental, and other pressures of urban traffic: a decision analysis on trip aggregation. BMC PUBLIC HEALTH 2005;5:123. http://biomedcentral.com/1471-2458/5/123/abstract Assessment: http://en.opasnet.org/w/Cost-benefit_assessment_on_composite_traffic_in_Helsinki. Accessed 1 Feb 2020.</ref>  
| 2005
| In-house
|----
| rowspan="5"|Risks and benefits of fish consumption
| 12
| Benefit-risk assessment of Baltic herring in Finland<ref>Tuomisto JT, Niittynen M, Turunen A, Ung-Lanki S, Kiviranta H, Harjunpää H, Vuorinen PJ, Rokka M, Ritvanen T, Hallikainen A. Itämeren silakka ravintona – Hyöty-haitta-analyysi. [Baltic herring as food - a benefit-risk analysis] ISBN 978-952-225-141-1. Helsinki: Eviran tutkimuksia 1; 2015 (in Finnish). Assessment: http://fi.opasnet.org/fi/Silakan_hy%C3%B6ty-riskiarvio. Accessed 1 Feb 2020.</ref>  
| 2015
| Collaboration with Finnish Food Safety Authority
|----
| 13
| Benefit-risk assessment of methyl mercury and omega-3 fatty acids in fish<ref>Leino O, Karjalainen AK, Tuomisto JT. Effects of docosahexaenoic acid and methylmercury on child's brain development due to consumption of fish by Finnish mother during pregnancy: A probabilistic modeling approach. Food Chem Toxicol. 2013;54:50-8. doi:10.1016/j.fct.2011.06.052. Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_of_methyl_mercury_and_omega-3_fatty_acids_in_fish. Accessed 1 Feb 2020.</ref>  
| 2009
| Beneris
|----
| 14
| Benefit-risk assessment of fish consumption for Beneris<ref>Gradowska PL. Food Benefit-Risk Assessment with Bayesian Belief Networks and Multivariable Exposure-Response. Delft: Delft University of Technology (doctoral dissertation); 2013. https://repository.tudelft.nl/islandora/object/uuid:9ced4cb2-9809-4b58-af25-34e458e8ea23/datastream/OBJ Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_of_fish_consumption_for_Beneris. Accessed 1 Feb 2020.</ref>  
| 2008
| Beneris
|----
| 15
| Benefit-risk assessment on farmed salmon<ref>Tuomisto JT, Tuomisto J, Tainio M, Niittynen M, Verkasalo P, Vartiainen T et al. Risk-benefit analysis of eating farmed salmon. Science 2004;305(5683):476. Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_on_farmed_salmon. Accessed 1 Feb 2020.</ref>  
| 2004
| In-house
|----
| 16
| Benefit-risk assessment of Baltic herring and salmon intake<ref name="goherr2020">Tuomisto JT, Asikainen A, Meriläinen P et Haapasaari P. Health effects of nutrients and environmental pollutants in Baltic herring and salmon: a quantitative benefit-risk assessment. BMC Public Health 20, 64 (2020). https://doi.org/10.1186/s12889-019-8094-1 Assessment: http://en.opasnet.org/w/Goherr_assessment, data archive: https://osf.io/brxpt/. Accessed 1 Feb 2020.</ref>  
| 2018
| BONUS GOHERR
|----
| rowspan="2"| Dioxins, fine particles  
| 17
| TCDD: A challenge to mechanistic toxicology<ref name="tuomisto1999">Tuomisto JT. TCDD: a challenge to mechanistic toxicology [Dissertation]. Kuopio: National Public Health Institute A7; 1999.</ref>
| 1999
| EC ENV4-CT96-0336
|----
| 18
| Comparative risk assessment of dioxin and fine particles<ref>Leino O, Tainio M, Tuomisto JT. Comparative risk analysis of dioxins in fish and fine particles from heavy-duty vehicles. Risk Anal. 2008;28(1):127-40. Assessment: http://en.opasnet.org/w/Comparative_risk_assessment_of_dioxin_and_fine_particles. Accessed 1 Feb 2020.</ref>
| 2007
| Beneris
|----
| Plant-based food supplements
| 19
| Compound intake estimator<ref>Assessment: http://en.opasnet.org/w/Compound_intake_estimator. Accessed 1 Feb 2020.</ref>  
| 2014
| Plantlibra
|----
| rowspan="3"| Health and ecological risks of mining
| 20
| Paakkila asbestos mine<ref name="paakkila1999">Tuomisto JT, Pekkanen J, Alm S, Kurttio P, Venäläinen R, Juuti S et al. Deliberation process by an explicit factor-effect-value network (Pyrkilo): Paakkila asbestos mine case, Finland. Epidemiol 1999;10(4):S114.</ref>
| 1999
| In-house
|----
| 21
| Model for site-specific health and ecological assessments in mines<ref>Kauppila T, Komulainen H, Makkonen S, Tuomisto JT, editors. Metallikaivosalueiden ympäristöriskinarviointiosaamisen kehittäminen: MINERA-hankkeen loppuraportti. [Summary: Improving Environmental Risk Assessments for Metal Mines: Final Report of the MINERA Project.] Helsinki: Geology Survey Finland, Research Report 199; 2013. 223 p. ISBN 978-952-217-231-0. Assessment: http://fi.opasnet.org/fi/Minera-malli. Accessed 1 Feb 2020.</ref>  
| 2013
| Minera
|----
| 22
| Risks of water from mine areas <ref>Assessment: http://fi.opasnet.org/fi/Kaivosvesien_riskit_(KAVERI-malli). Accessed 1 Feb 2020.</ref>
| 2018
| Kaveri
|----
| rowspan="2"| Water safety  
| 23
| Water Guide for assessing health risks of drinking water contamination<ref>Assessment: http://en.opasnet.org/w/Water_guide. Accessed 1 Feb 2020.</ref>  
| 2013
| Conpat
|----
| 24
| Bathing Water Guide for assessing health risks of bathing water contamination<ref>Assessment: http://en.opasnet.org/w/Bathing_water_guide. Accessed 1 Feb 2020.</ref>
| 2019
| Water Guide update
|----
| rowspan="2"|Organisational assessments  
| 25
| Analysis and discussion about research strategies or organisational changes within THL  
| 2017
| In-house
|----
| 26
| Transport and communication strategy in digital Finland<ref>Liikenne ja viestintä digitaalisessa  
Suomessa. Liikenne- ja viestintäministeriön tulevaisuuskatsaus 2014 [Transport and and communication in digital Finland] Helsinki: Ministry of Transport and Communication; 2014. http://urn.fi/URN:ISBN:978-952-243-420-3 Assessment: http://fi.opasnet.org/fi/Liikenne_ja_viestint%C3%A4_digitaalisessa_Suomessa_2020. Accessed 1 Feb 2020.</ref>  
| 2014
| Collaboration with the Ministry of Transport and Communications of Finland
|----
| rowspan="2"|Information use in government or municipality decision support
| 27
| Case studies: Assessment of immigrants' added value; Real-time co-editing, Fact-checking, Information design<ref>Tuomisto JT, Muurinen R, Paavola J-M, Asikainen A, Ropponen T, Nissilä J. Tiedon sitominen päätöksentekoon. [Binding knowledge to decision making] Helsinki: Publications of  the Government's analysis, assessment and research activities 39; 2017. ISBN 978-952-287-386-6 http://tietokayttoon.fi/julkaisu?pubid=19001. Assessment: http://fi.opasnet.org/fi/Maahanmuuttoarviointi. Accessed 1 Feb 2020.</ref>   
| 2016
| Yhtäköyttä, collaboration with Prime Minister's Office
|----
| 28
| Evaluation of forest strategy process for Puijo, Kuopio<ref>Kajanus M, Ollikainen T, Partanen J, Vänskä I. Kävijätutkimukseen perustuva Puijon virkistysmetsien hoito- ja käyttösuunnitelma. [Forest strategy for recreational forests at Puijo, Kuopio, based on visitor study.] (in Finnish) Kuopion kaupunki, Metsätoimisto; 2010. http://fi.opasnet.org/fi-opwiki/images/8/8a/Puijo-loppuraportti.pdf. Assessment: http://fi.opasnet.org/fi/Puijon_metsien_k%C3%A4ytt%C3%B6suunnitelman_p%C3%A4%C3%A4t%C3%B6ksenteko Accessed 1 Feb 2020.</ref>
| 2012
| In-house
|----
| Indicator development
| 29
| Environmental health indicators in Finland<ref>Tuomisto JT, Asikainen A, Korhonen A, Lehtomäki H. Teemasivu ympäristöterveys [Portal: Environmental health]. A website, THL, 2018. [http://fi.opasnet.org/fi/Teemasivu:Ymp%C3%A4rist%C3%B6terveys]</ref>
| 2018
| Ympäristöterveysindikaattori
|----
| Structuring discussions
| 30
| Developing and testing tools and practices for structured argumentation<ref>Hastrup T. Knowledge crystal argumentation tree. https://dev.tietokide.fi/?Q10. Web tool. Accessed 1 Feb 2020.</ref>
| 2019
| Citizen Crystal
|----
| Food safety and diet
| 31
| Health risks of chemical and microbial contaminants and dietary factors in food in Finland<ref>
Suomi J, Haario P, et al. Costs and Risk Assessment of the Health Effects of the Food System. Publications of the Government's analysis, assessment and research activities 2019:64. http://urn.fi/URN:ISBN:978-952-287-797-0. Accessed 1 Feb 2020.</ref>
| 2019
| Ruori, collaboration with e.g. Ministry of Agriculture and Prime Minister's Office
|}


The methods described above have been used in several research projects (see the funding declaration) and health assessments (some mentioned in Table 8) since 2004. They have also been taught on international ''Kuopio Risk Assessment Workshops'' for doctoral students in 2007, 2008, and 2009 and on a Master's course ''Decision Analysis and Risk Management'' (6 credit points), organised by the University of Eastern Finland (previously University of Kuopio) in 2011, 2013, 2015, and 2017.
=== References for assessments ===


As methods and tools were developed side by side with practical assessment work, there is extensive experience about some parts of the method. Some newer parts (e.g. value profiles) are merely ideas with no practical implementation yet. This evaluation is based on the experience accumulated during the scientific, expert, and teaching work. We will follow the properties of good assessment and apply them to the method itself.
<references/>


=== Quality of content ===
== Appendix S2: Examples of insight networks ==


Open policy practice does not restrict the use of any previous methods that are necessary for successful decision support. Therefore, it is safe to say that the quality of the produced information is at least the same as with other methods. We have noticed in numerous cases that the structures offered for, e.g., assessments or knowledge crystals help in organising and understanding the content.
<gallery widths=400 heights=300>
File:Why dioxin is a problem.svg|Figure S2-1. Insight network about dioxins in Baltic fish. The focus is on reasoning and value judgements and their connections to causal chains about dioxins and health.
File:Legend for extended causal diagrams.svg|Figure S2-2. Legend for main object types used in insight networks. The actual colours and formatting depend on the capabilities of the software used.
File:Open policy ontology network items.png|Figure S2-3. Items in open policy ontology shown in an insight network format. All relations are of types 'has subclass' or 'has part'.
File:Open policy ontology network relations.png|Figure S2-4. Relations in open policy ontology shown in an insight network format. In this graph, relations are shown as nodes, and the arcs between relations are of types 'has subclass', 'has part', or 'inverse'.
File:Risks of open government.svg|Figure S2-5. Structured discussion about risks of open governance. The original discussion was held in Finnish and can be found at http://fi.opasnet.org/fi/Keskustelu:Jaettu_ymm%C3%A4rrys. Trapezoids are statements. Light blue is the opening fact statement ('Openness causes serious problems to the quality of policy making.'), and blue is the closing fact statement that has been updated based on the discussion ('Openness causes serious problems to the quality of policy making, if it is too tightly connected to impulsive thinking in social media.'). Orange arguments are true and gray arguments are false. A red arrow is an attack, a green arrow is a defence, and a gray arrow is irrelevant (Accessed 1 Feb 2020).
File:Structured discussion on an argumentation tool.png|Figure S2-6. Structured discussion at an argumentation tool (https://dev.tietokide.fi/?Q10. Accessed 1 Feb 2020)
File:Helsinki energy decision 2015.png|Figure S2-7. Insight network about the assessment model for the Helsinki energy decision 2015.
</gallery>


Criticism is emphasised in open policy practice as an integral part of the scientific method. Giving critique is made as easy as possible. Surprisingly, we still see fairly little of it in practical assessments. There seem to be several reasons: experts have little time to actually read other people's assessments and give detailed comments; people are reluctant to interfere with other people's work even within a joint project, so they rather keep to a strict division of tasks; there are no rewards or incentives for giving critique; and when an assessment spans several web pages, it is not clear where and how to contribute.
== Appendix S3: Open policy ontology ==


We have seen a lack of criticism even in vaccine-related assessments that are potentially emotive. With active facilitation we were able to get comments and critique from both the drug industry and vaccine-related citizen organisations, and they were all very matter-of-fact. This was interesting, as the same topics cause outrage in social media, but we did not see that in structured assessments. However, one of the most common objections and fears against open assessment is that outside contributions will be ill-informed and malevolent. In our assessments, they never have been.
[[Shared understanding]] aims at producing a description of different views, opinions, and facts related to a specific topic such as a decision process. The open policy ontology describes the information structures that are needed to document shared understanding of a complex decision situation. The purpose of the structure is to help people identify hidden premises, beliefs, and values and explicate possible discrepancies. This is expected to produce better understanding among participants.  


A lack of contributions limits the number of new views and ideas that could potentially be identified with open participation. However, even if we do not see the full potential of criticism, we do not think that the lack of open critique hampers the quality of assessments, because all the common practices of, e.g., experts' source-checking are still in place. Actually, the current practices in research and assessment are even worse with respect to open criticism: it rarely happens. Pre-publication peer review is almost the only time when scientific work is criticised by people outside a research group, and those reviews are typically not open. A minute fraction of published works are criticised openly in journals; a poor work is simply left alone and forgotten.
The basic structure of a shared understanding is a network of items and relations between them. This network uses the [[:en:Resource description framework|Resource description framework]] (RDF), an ontology standard used to describe much of the content on the Internet. Items and relations (aka properties) are collectively called resources. Each item is typically of one of the types mentioned below. This information is documented using the property '''[https://www.wikidata.org/wiki/Property:P31 instance of]''' (e.g. [[Goherr assessment]] is an instance of assessment).


However, there are some active platforms for scientific discussion, such as the physics preprint server arXiv.org, and similar platforms are emerging in other disciplines. Such a practice suits continually updated knowledge crystals much better than what is typically done in most research areas. Open discussion is increasing, and knowledge crystals are one way to facilitate this positive trend.
Items are written descriptions of the actual things (people, tasks, publications, or phenomena), and on this page these descriptions rather than the actual things are discussed. Different item types have different levels of standardisation and internal structure. For example, [[knowledge crystal]]s are web pages that always have headings question, answer and rationale, and the information is organised under those headings. Some other items describe e.g. statements that are free-text descriptions about how a particular thing is or should be (according to a participant), and yet some others are metadata about publications. A common feature is that all items contain information that is relevant for a decision.


=== Relevance ===
In the open policy ontology, each item may contain lengthy texts, graphs, analyses, or even models. However, the focus here is on how the items are related to each other. The actual content is often referred to by one key sentence only (a description). Each item also has a unique URI identifier that is used for automatic handling of the data.


A major issue with relevance is that the communication between decision makers and experts is not optimal, and therefore experts don't have good knowledge about what questions need answers and how the information should be provided<ref name="jussila2012"/>. Also, decision makers prefer information that supports views that have already been selected on political grounds.
The most important items are [[knowledge crystal]]s, and they are described here; a minimal structural sketch follows the list.
* '''[[Assessment]]''' describes a particular decision situation and focuses on estimating the impacts of different options. Its purpose is to support the making of that decision. Unlike other knowledge crystals, assessments typically have defined start and end dates, and they are closed after the decision is made. They also have contextually and situationally defined goals so that they can better serve the needs of the decision makers.
* '''[[Variable]]''' answers a particular factual or ethical question that is typically needed in one or more assessments. The answer of a variable is continually updated as new information arises, but its question remains constant in time. Variable is the basic building block of assessments. In R, variables are typically implemented using ovariable objects from OpasnetUtils package.
* '''[[Method]]''' tells how to systematically implement a particular information task. Method is the basic building block for describing the assessment work (not reality, like variables). In practice, methods are "how-to-do" descriptions of how information should be produced, collected, analysed, or synthesised in an assessment. Typically, methods contain software code or another algorithm to actually perform the method easily. In R, methods are typically ovariables that require some context-specific upstream information about dependencies before they can be calculated.
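A minimal sketch of the shared question-answer-rationale structure of knowledge crystals is given below. It assumes nothing about the actual Opasnet or OpasnetUtils implementation; all names, fields, and the example content are illustrative only.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class KnowledgeCrystal:
    """Illustrative structure shared by assessments, variables, and methods."""
    identifier: str                 # unique URI of the item (hypothetical example below)
    question: str                   # the research or policy question the item answers
    answer: str                     # current best answer, updated as new information arises
    rationale: str                  # data, reasoning, and discussions supporting the answer
    instance_of: str = "variable"   # e.g. "assessment", "variable", or "method"

herring_intake = KnowledgeCrystal(
    identifier="http://example.org/opp/herring_intake",
    question="How much Baltic herring is eaten in Finland?",
    answer="An up-to-date estimate, revised whenever new consumption data appear.",
    rationale="Survey data, models, and structured discussions documented on the item page.",
)
print(herring_intake.question)
</syntaxhighlight>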


In open policy practice, the questions and structures aim to explicate relevant questions, so that experts get a clear picture of what information is expected and needed. They also focus discussions and assessment work on questions that have been identified as relevant. On the other hand, shared understanding and other explicit documents of relevant information make it harder for a decision maker to pick only favourable pieces of information, at least if there is political pressure. The pneumococcal vaccine assessment (see Table S1-1) faced clear political pressure, and it would have been difficult for the decision maker to deviate from the conclusions of the assessment. However, assessments are typically taken as just one piece of information rather than normative guidance; this clearly depends on the political culture, how strong the commitment to evidence-based decision making is, and how well an assessment succeeds in incorporating all relevant views.
There are also other important classes of items:
* '''Publication''' is any documentation that contains useful information related to a decision. Publications that are commonly used at Opasnet include encyclopedia article, lecture, nugget, and study. Other publications at Opasnet are typically uploaded as files.
** '''[[Encyclopedia article]]''' is an object that describes a topic like in Wikipedia rather than answers a specific research question. They do not have a predefined attribute structure.
** '''[[Lecture]]''': A lecture contains a piece of information that is to be mediated to a defined audience with a defined learning objective. It can also be a description of a process during which the audience learns, instead of being a passive recipient of information.
** '''[[Nugget]]''' is an object that is not editable by other people than a dedicated author (group) and is not expected to be updated once finalised. They do not have a predefined attribute structure.
** '''[[Study]]''' describes a research study and its answers, i.e. observational or other data obtained in the study. The research questions are described as the question of the information object, and the study methods are described as the rationale of the object. Unlike in an article, introduction or discussion may be missing, and unlike in a variable, the answer and rationale of the study are more or less fixed after the work is done; this is because the interpretations of the results typically happen elsewhere, e.g. in variables for which a study contains useful information.
* '''[[Discussion]]''' is a hierarchically structured documentation of a discussion about a defined statement or statements (a minimal data-structure sketch follows this list).
* '''Stakeholder''' page is used to describe a person or group that is relevant for a decision or decision process; they may be an actor that has an active role in decision making or is a target of impacts. Contributors of Opasnet are described on their own user pages; other stakeholders may have their page on the main namespace.
* '''Process''' describes elements of a decision process.
* '''Action''' describes what, who and when should act to e.g. perform an assessment, make a decision, or implement policies.


Shared understanding and detailed assessments challenge participants to clarify what they mean and what is found relevant. For example, the composite traffic and biodiesel assessments were already quite developed when discrepancies between objectives, data, and draft conclusions forced a rethinking of the purposes of the assessments. In both cases, the main research questions were adjusted, and more relevant assessments and conclusions were produced. Political questions and arguments are also expected to become clearer in a similar way when more of them are incorporated into systematic scrutiny.
Relations show different kinds of connections between items; a sketch of how typed relations could be encoded in an insight network follows the list.
* '''Causal link''' tells that the subject may change the object (e.g. affects, increases, decreases, prevents).
* '''Participatory link''' describes a stakeholder's particular role related to the object (participates, negotiates, decides).
* '''Operational link''' tells that the subject has some kind of practical relation to the object (executes, offers, tells).
* '''Evaluative link''' tells that the subject shows preference or relevance about the object (has truthlikeness, value, popularity, finds important).
* '''Referential link''' tells that the object is used as a reference of a kind for the subject (makes relevant; associates to; has reference, has tag, has category).
* '''Argumentative link''' occurs between statements that defend or attack each other (attack, defend, comment).
* '''Property link''' connects an evaluative (acceptability, usability), a logical (opposite, inverse) or set theory (has subclass, has part) property to the subject.
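For illustration only, the relation types above could be encoded as typed edges of a small insight network, for example with the Python networkx library. The node names, relation labels, and link types below are hypothetical examples, not an existing Opasnet data set.

<syntaxhighlight lang="python">
import networkx as nx

# A hypothetical mini insight network; every edge carries its relation and link type.
g = nx.DiGraph()

# Causal link: the subject may change the object.
g.add_edge("Herring fishing intensity", "Dioxin intake", relation="affects", link_type="causal")
# Participatory link: a stakeholder's role related to the object.
g.add_edge("EU Parliament", "Herring fishing intensity", relation="decides", link_type="participatory")
# Evaluative link: the subject shows preference or relevance about the object.
g.add_edge("Fishers", "Herring fishing intensity", relation="finds important", link_type="evaluative")

# List all causal links in the network.
for subject, obj, data in g.edges(data=True):
    if data["link_type"] == "causal":
        print(f"{subject} --{data['relation']}--> {obj}")
</syntaxhighlight>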


=== Availability ===
=== Item types ===


A typical problem with availability is that a piece of information is designed for a specific user group and made available in a place targeted to that group. In contrast, open policy practice is designed with the idea that anyone can be a reader or contributor in a user group, so by default everything is made openly available on the Internet. In our experience, such an approach works well in practice as long as there are also seamless links to repositories for non-publishable data and contributors know the open-source tools such as wiki and R. Such openness is, however, a major perceived problem for many experts; this issue is discussed further under acceptability.
This ontology is specifically about decision making, and therefore actions (and decisions to act) are handled explicitly. However, any natural, social, ethical or other phenomena may relate to a decision and therefore the vocabulary has to be very generic.


Another problem is that even if users find an assessment page, they are unsure about its status and whether some information is missing. This is because many pages are works in progress and not finalised for end users. We have tried to clarify this by adding status declarations at the tops of pages. Declaring drafts as drafts has also helped experts who are uncomfortable showing their own work before it is fully complete.
{| {{prettytable}}
 
|+'''Table S3-1. Item types used in open policy ontology.
The information structure using knowledge crystals and one topic per page has proven to be a good one. Usually it is easy to find the page of interest even if there is only a vague idea of its name or specific content. Also, it is mostly straightforward to see which information belongs to which page.
|----
 
! Class|| English name|| Finnish name|| Description
There are some comments about not being able to find pages in Opasnet, but in our experience these problems are outweighed by the benefit that people can easily find detailed material with, e.g., search engines without prior knowledge about Opasnet. The material seems to be fairly well received, as there were 52,000 visits to and 90,000 page views of the Finnish and English Opasnet websites in 2017. The most interesting topics seemed to be ecological and health impacts of mining and drinking water safety. Also the pneumococcus vaccine assessment, the Helsinki energy decision, and the transport and communication strategy in digital Finland were popular pages while they were being prepared.
|----
 
|| || resource|| resurssi|| All items and relations are resources
Lack of participation among decision makers, stakeholders, and experts outside the assessment team is a constant problem in making existing information available. Assessments are still seen as separate from the decision process, and the idea that scientific assessments could contain value judgements from the public is unprecedented. The closest resemblance is to environmental impact assessments, where the law requires public hearings, but many people are sceptical about the influence of the comments given. Also, Wikipedia has noticed that only a few percent of readers ever contribute, and the number of active contributors is even lower<ref name="wikipedians">Wikipedia: Wikipedians. https://en.wikipedia.org/wiki/Wikipedia:Wikipedians. Accessed 24 Jan 2018</ref>.
|----
 
|| resource|| item|| asia|| Relevant pieces of information related to policy making. Sometimes also refers to the real-life things that the information is about. Items are shown as nodes in insight networks.
Experts are a special group of interest, as they possess vast amounts of relevant information that is not readily available to decision makers. Yet, we have found that experts are not easily motivated to do policy support work.
|----
 
|| resource|| relation|| relaatio|| Information about how items are connected to each other. Relations are shown as edges in insight networks.
This emphasises the need for facilitation and active invitations for people to express their views. Importantly, direct online participation is not an objective as such but one way among others to collect views and ideas. It is also more important to cover all major ideas than to represent every person individually. In any case, there is a clear need to inform people about new possibilities for participation in societal decisions. We have found it very useful to simply provide links to ongoing projects to many different kinds of user groups across and outside organisations.
|----
|| item|| substance|| ilmiö|| Items about a substantive topic or phenomenon itself: What issues relate to a decision? What causal connections exist between issues? What scientific knowledge exist about the issues? What actions can be chosen? What are the impacts of these actions? What are the objectives and how can they be reached? What values and preferences exist?
|----
|| item|| stakeholder|| sidosryhmä|| Items about people or organisations who have a particular role in a policy process, either as actors or targets of impacts: Who participates in a policy process? Who should participate? Who has necessary skills for contributing? Who has the authority to decide? Who is affected by a decision?
|----
|| item|| process|| prosessi|| Items about doing or happening in relation with a topic, especially information about how a decision will be made): What will be decided? When will it be decided? How is the decision prepared? What political realities and restrictions exist?
|----
|| item|| action|| toiminta|| Items about organising decision support (impact assessment, decision making, implementation, and evaluation): What tasks are needed to collect and organise necessary information? How is information work organised? How and when are decisions implemented? Actions are also important afterwards to distribute merit and evaluate the process: Who did what? How did information evolve and by whom?
|----
|| item|| information object|| tieto-olio|| A specified structure containing information about substance, stakeholders, processes, methods, or actions.
|----
|| information object|| knowledge crystal|| tietokide|| information object with a standardised structure and contribution rules
|----
|| knowledge crystal|| assessment|| arviointi|| Describes a decision situation and typically provides relevant information to decision makers before the decision is made (or sometimes after the decision about its implementation or success). It is mostly about the knowledge work, i.e. tasks for decision support.
|----
|| knowledge crystal|| variable|| muuttuja|| Describes a real-world topic that is relevant for the decision situation. It is about the substance of the topic.
|----
|| knowledge crystal|| method|| metodi|| Describes how information should be managed or analysed so that it answers the policy-relevant questions asked. How to perform information work? What methods are available for a task? How to participate in a decision process? How to use statistical and other methods and tools? How to motivate participation? How to measure merit of contributions?
|----
|| information object|| discussion part|| keskustelun osa|| Information object that is used to organise discussions into a specified structure. The purpose of the structure is to help validation of statements and facilitate machine learning.
|----
|| information object|| discussion|| keskustelu|| Discussion, or structured argumentation, describes arguments about a particular statement and a synthesis about an acceptable statement. In a way, discussion is (a documentation of) a process of analysing the validity of a statement.
|----
|| discussion|| fact discussion|| faktakeskustelu|| Discussion that can be resolved based on scientific knowledge.
|----
|| discussion|| value discussion|| arvokeskustelu|| Discussion that can be resolved based on ethical knowledge.
|----
|| discussion part|| statement|| väite|| Proposition claiming that something is true or ethically good. A statement may be developed in a discussion by adding and organising related argumentation (according to pragma-dialectics), or by organising premises and inference rules (according to Perelman).
|----
|| statement|| value statement|| arvoväite|| Proposition claiming that something is ethically good, better than something else, prioritised over something, or how things should be.
|----
|| statement|| fact statement|| faktaväite|| Proposition claiming how things are or that something is true.
|----
|| value statement|| true value statement|| tosi arvoväite|| A statement that has not been successfully invalidated.
|----
|| value statement|| false value statement|| epätosi arvoväite|| A statement that has been successfully invalidated.
|----
|| fact statement|| true fact statement|| tosi faktaväite||
|----
|| fact statement|| false fact statement|| epätosi faktaväite||
|----
|| statement|| true statement|| tosi väite||
|----
|| statement|| false statement|| epätosi väite||
|----
|| statement|| opening statement|| avausväite|| A statement that is the basis for a structured discussion, a priori statement.
|----
|| statement|| closing statement|| lopetusväite|| A statement that is the resolution of a structured discussion, a posteriori statement. Closing statement becomes an opening statement when the discussion is opened again.
|----
|| opening statement|| fact opening statement|| avausfaktaväite||
|----
|| closing statement|| fact closing statement|| lopetusfaktaväite||
|----
|| opening statement|| value opening statement|| avausarvoväite||
|----
|| closing statement|| value closing stetement|| lopetusarvoväite||
|----
|| discussion part|| argument|| argumentti|| A statement that has also contains a relation to its target as an integral part. Due to this relation, arguments appear inside discussions and target directly or indirectly the opening statement.
|----
|| discussion part|| argumentation|| väittely|| Hierarchical list of arguments related to a particular statement.
|----
|| information object|| knowledge crystal part|| tietokideosa|| This is shown separately to illustrate that the objects are actually linked by has part rather than has subclass relation.
|----
|| knowledge crystal part|| question|| kysymys|| A research question asked in a knowledge crystal. The purpose of a knowledge crystal is to answer the question.
|----
|| knowledge crystal part|| answer|| vastaus|| An answer or set of answers to the question of a knowledge crystal, based on any relevant information and inference rules.
|----
|| knowledge crystal part|| rationale|| perustelut|| Any data, discussions, calculations or other information needed to convince a critical rational reader that the answer of a knowledge crystal is good.
|----
|| knowledge crystal part|| answer part|| vastausosa|| This is shown separately to illustrate that the objects are actually linked by has part rather than has subclass relation.
|----
|| answer part|| result|| tulos|| The actual, often numerical result to the question, conditional on relevant indices.
|----
|| answer part|| index|| indeksi|| A list of possible values for a descriptor. Typically used in describing the result of an ovariable.
|----
|| answer part|| conclusion|| päätelmä|| In an assessment, a textual interpretation of the result. Typically a conclusion is about what decision options should or should not be rejected and why based on the result.
|----
|| knowledge crystal part|| ovariable|| ovariable|| A practical implementation of a knowledge crystal in modelling code. Ovariable takes in relevant information about data and dependencies and calculates the result. Typically implemented in R using OpasnetUtils package and ovariable object type.
|----
|| ovariable|| key ovariable|| avainovariable|| An ovariable that is shown on an insight network even if some parts are hidden due to practical reasons.
|----
|| information object|| publication|| julkaisu|| Any published report, book, web page or similar permanent piece of information that can be unambiguously referenced.
|----
|| publication|| nugget|| tiedomuru|| An object that is not editable by other people than a dedicated author (group).
|----
|| substance|| topic|| aihe|| A description of an area of interest. It defines boundaries of a content rather than defines the content itself, which is done by statements. When the information structure is improved, a topic often develops into a question of a knowledge crystal, while a statement develops into an answer of a variable.
|----
|| priority|| objective|| tavoite|| A desired outcome of a decision. In shared understanding description, it is a topic (or variable) that has value statements attached to it.
|----
|| substance|| risk factor|| riskitekijä||
|----
|| substance|| indicator|| indikaattori|| Piece of information that describes a particular substantive item in a practical and often standard way.
|----
|| indicator|| risk indicator|| riski-indikaattori|| Indicator about (health) risk or outcome
|----
|| information object|| data|| tietoaineisto||
|----
|| information object|| graph|| kuvaaja|| Graphical representation of a piece of information. Typically is related to an information object with ''describes'' relation.
|----
|| work|| data work|| tietotyö||
|----
|| work|| data use|| tiedon käyttö||
|----
|| substance|| priority|| prioriteetti||
|----
|| substance|| expense|| kustannus||
|----
|| substance|| health impact|| terveysvaikutus||
|----
|| stakeholder|| decision maker|| päättäjä||
|----
|| stakeholder|| public officer|| virkamies||
|----
|| stakeholder|| assessor|| arvioija||
|----
|| stakeholder|| expert|| asiantuntija||
|----
|| stakeholder|| citizen|| kansalainen||
|----
|| stakeholder|| agent|| toimija||
|----
|| action|| task|| toimenpide|| action to be taken when the option has been selected
|----
|| action|| decision|| päätös|| action to be taken when the option is yet to be selected. Describes a particular event where a decision maker chooses among defined alternatives. This may also be a part of an assessment under heading Decisions and scenarios.
|----
|| action|| work|| työ|| continuous actions of the same kind and typically independent of the decision at hand. If the decision changes work routines, the action to make this change happen is called task.
|----
|| work|| prevention|| ennaltaehkäisy|| trying to prevent something
|----
|| work|| treatment|| hoito|| trying to fix something when something has already happened
|----
|| work|| support|| tuki|| work that aids in the completion of the selected option, in whatever way
|----
|| method|| open policy practice|| avoin päätöksentekokäytäntö|| framework for planning, making, and implementing decisions
|----
|| method|| open assessment|| avoin arviointi|| method answering this question: How can factual and value information be organised for supporting societal decision making when open participation is allowed?
|----
|| method|| analysis|| analyysi||
|----
|| method|| reporting|| raportointi||
|----
|| method|| measurement|| mittaus||
|----
|| publication|| study|| tutkimus||
|----
|| publication|| encyclopedia article|| ensyklopedia-artikkeli|| An object that describes a topic rather than answers a specific research question.
|----
|| publication|| lecture|| luento|| Contains a piece of information that is to be mediated to a defined audience and with a defined learning objective.
|----
|| method|| procedure|| toimintamalli||
|----
|| method|| principle|| periaate|| a short generic guidance for information work to ensure that the work is done properly. They especially apply to the execution phase.
|----
|| principle|| intentionality|| tavoitteellisuus|| See Table 3 for explanations.
|----
|| principle|| causality|| syysuhteiden kuvaus||
|----
|| principle|| criticism|| kritiikki||
|----
|| principle|| permanent resource locations|| kohteellisuus||
|----
|| principle|| openness|| avoimuus||
|----
|| principle|| reuse|| uusiokäyttö||
|----
|| principle|| use of knowledge crystals|| tietokiteiden käyttö||
|----
|| principle|| grouping|| ryhmäytyminen|| Facilitation methods are used to promote the participants' feeling of being an important member of a group that has a meaningful purpose.
|----
|| principle|| respect|| arvostus|| Contributions are systematically documented and their merit evaluated so that each participant receives the respect they deserve based on their contributions.
|----
|| objective|| expense objective|| kustannustavoite||
|----
|| process|| step|| jakso|| one of sequential time intervals when a particular kind of work is done in decision support. In the next step, the nature of the work changes.
|----
|| step|| impact assessment|| vaikutusarviointi|| the first step in a decision process. Helps in collecting necessary information for making a decision.
|----
|| step|| decision making|| päätöksenteko|| the second step in a decision process. When the decision maker actually chooses between options.
|----
|| step|| implementation|| toimeenpano|| the third step in a decision process. When the chosen option is put in action.
|----
|| step|| evaluation|| evaluointi|| the fourth step in a decision process. When the outcomes of the implementation are evaluated.
|----
|| process|| phase|| vaihe|| one part of a decision work process where focus is on particular issues or methods. Typically phases overlap temporally.
|----
|| phase|| shared understanding|| jaettu ymmärrys|| documenting of all relevant views, facts, values, and opinions about a decision situation in such a way that agreements and disagreements can be understood
|----
|| phase|| execution|| toteutus|| production of necessary information for a decision at hand
|----
|| phase|| evaluation and management|| seuranta ja ohjaus|| ensuring that all work related to a decision will be, is, and has been done properly
|----
|| phase|| co-creation|| yhteiskehittäminen|| helping people to participate, contribute, and become motivated about the decision work
|----
|}
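
To make the discussion items defined in Table S3-1 more concrete, the following is a minimal, hypothetical R sketch of how a structured discussion (an opening statement, arguments, and a closing statement) could be represented and resolved. The example statement, the arguments, and the resolution rule are illustrative assumptions only; they are not part of the Opasnet or OpasnetUtils code.

<pre>
# A structured discussion as a plain R list, following the item types of
# Table S3-1: an opening statement and a hierarchy of arguments.
# Illustrative sketch only, not the Opasnet implementation.
discussion <- list(
  type    = "fact discussion",
  opening = "Dioxin concentrations in large Baltic herring exceed the EU limit.",
  arguments = list(
    list(type = "attack", valid = FALSE,
         text = "Dioxins degrade quickly, so the limit is irrelevant."),
    list(type = "defend", valid = TRUE,
         text = "Monitoring data show exceedances in large, old herring.")
  )
)

# A naive resolution rule: the opening statement stands unless at least one
# valid attack remains.
resolve <- function(d) {
  attacks <- Filter(function(a) a$type == "attack" && isTRUE(a$valid), d$arguments)
  if (length(attacks) == 0) {
    paste("Closing statement:", d$opening)
  } else {
    "Closing statement: the opening statement was invalidated."
  }
}

resolve(discussion)
</pre>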


=== Relation types ===

Relations are edges between items (or nodes) in an insight network. A relation I is said to be an inverse of relation R if and only if, for all items subject and object, the claim "subject R object" is always equal to the claim "object I subject".
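
As an illustration of this definition, the following R sketch stores relations as an edge list and normalises inverse relations by swapping subject and object, so that an insight network uses only one direction of each relation pair. The item names and the small inverse table are illustrative assumptions; this is not the actual Opasnet data model.

<pre>
# Relations between items as an edge list (subject, relation, object).
# Illustrative names; a tiny subset of the relation types in Table S3-2.
edges <- data.frame(
  subject  = c("emission reduction", "health risk"),
  relation = c("decreases",          "caused by"),
  object   = c("exposure",           "exposure"),
  stringsAsFactors = FALSE
)

# Inverse relations mapped to their direct forms.
inverses <- c("is decreased by" = "decreases",
              "caused by"       = "cause of")

# Rewrite edges that use an inverse relation into the direct form,
# swapping subject and object as the definition above requires.
normalise <- function(e, inv) {
  flip <- e$relation %in% names(inv)
  tmp              <- e$subject[flip]
  e$subject[flip]  <- e$object[flip]
  e$object[flip]   <- tmp
  e$relation[flip] <- unname(inv[e$relation[flip]])
  e
}

normalise(edges, inverses)
</pre>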
 
=== Usability ===
 
{{attack|#|A boundary object should be added, with a reference. Also mention the different kinds of tools that are already available (Appendix 4).|--[[User:Jouni|Jouni]] ([[User talk:Jouni|talk]]) 14:50, 24 January 2018 (UTC)}}
 
We have found it very useful to structure pages so that they start with a summary, then describe the research question and give a more detailed answer, and finally provide the user with relevant and increasingly detailed information in the rationale. On the other hand, some people have found this structure confusing, as they do not expect to see all the details of an assessment. This emphasises the need to publish easy summaries in other places as well, e.g. in blogs, newsletters, or policy briefs.
 
A strength of shared understanding is its capability to clarify complex issues and elicit implicit values and reasoning. It facilitates rational discussion about a decision, and it can also be used for creating political pressure against options that are not well substantiated. For example, the shared understanding about the research strategy of THL (see Table 8) was produced mostly based on critical discussions rather than scientific analysis. The process was well received and many participants found the discussions and syntheses illuminating.
 
Shared understanding can incorporate complex ideas from detailed discussions and also quantitative data from surveys. Often many different information sources and analyses are needed to produce a full picture. Indeed, the open policy practice method was developed using many approaches simultaneously: information structures, online tools, and testing with actual cases and different practices. This seemed to be a viable strategy. However, even more emphasis should have been put on the usability and user-friendliness of the tools and on building a user community.
 
In our experience, the slow development of the user community is partly but not mainly due to non-optimal usability of the tools: both MediaWiki and especially R require some learning before they can be used effectively. All students in the courses have been able to learn and operate effectively in Opasnet, so the learning challenge can be overcome if users are motivated. A bigger problem has been motivating people to change their practices of performing or participating in assessments.
 
=== Acceptability ===
 
A central theme related to acceptability is openness. Although it is a guiding principle in science, it is actually in conflict with many current practices. For example, it is common to hide expert work until it has been finalised and published, preferably in a peer-reviewed journal. Therefore, a demand to work openly and describe all reasoning and data from the very beginning is often seen as an unreasonable requirement, and it is a major reason for lack of participation. This observation has led to two opposite conclusions: either that openness should be promoted actively in all research and expert work, including decision support; or that openness as an objective is unnecessary and hinders expert work.
 
We have heard several objections against openness. People are concerned that expertise is not given proper weight if open participation is allowed. People fear that strong lobbying groups will hijack the process. People fear that self-organised groups produce low-quality information or even malevolent disinformation. Experts often demand the final say as the ultimate quality criterion, rather than trusting that data, reasoning, and criticism would do a better job. In brief, experts commonly think that it is simply easier and more efficient to produce high-quality information in closed groups.
 
An interesting example of this is the Integrated Environmental Health Impact Assessment System (IEHIAS) website that was created by two EU-funded projects, Intarese and Heimtsa. While it was being built, there was discussion about whether the website should be open (and integrated with Opasnet) or closed (and run separately). A clear majority of researchers wanted to work in a closed system and especially to retain control over their pages until they were finalised. However, at the end of the projects all content was unanimously published with an open license, and a few years later, when the maintenance funding ended, all content was moved from the IEHIAS website to Opasnet. In hindsight, developing two different websites on two different platforms and finally merging them took a lot of extra resources and gave little added value.
 
The openness of scientific publishing is increasing and many research funders demand publishing of data, so this megatrend in the scientific community is changing common practices. It has already been widely acknowledged that the current mainstream of proprietary (in contrast to open access) scientific publishing is a hindrance to spreading ideas and ultimately science. In addition, Wikipedia has shown that self-organised groups can indeed produce high-quality content<ref>Giles J. Internet encyclopaedias go head to head. Nature 2005;438:900–901 doi:10.1038/438900a</ref>. Our own experience is the same, and we have not seen hijacking, malevolent behaviour or low-quality junk contributions. We have, however, seen some robots producing unrelated advertisement material on Opasnet pages, but that is easy to identify and remove, and it has not become a problem.
 
All opinions are critically evaluated in open policy practice, and some of them will be found unsubstantiated. A decision maker is more likely to ignore such opinions when their problems have been identified than when they have not. However, shared understanding does not imply that proponents of unsubstantiated thoughts should be forced or pressured to reject them; it merely points out these discrepancies and thus nudges participants away from them. It also aims to inform the public so that they can put political pressure against poor ideas. Indeed, one of the ideas behind the method is that it should be good at identifying poor ideas rather than having the power to distinguish the best decision option from other good ones.
 
Sometimes people tend to stick to their original ideas. This is seen in politics, where a key purpose of information is to justify existing opinions rather than adjust them by learning<ref name="jussila2012"/>; and as acquiescence, i.e. situations where people know that their choice is irrational but they choose it anyway<ref>Walco DK, Risen JL. The Empirical Case for Acquiescing to Intuition. Psychological Science 2017;28:12:1807-1820. doi:10.1177/0956797617723377</ref>. Shared understanding may help with this, as the content of an opinion and the reasons to reject it become more explicit and external, and thus the opinion may be more easily revised by the person themselves or by others. Research should be performed on this particular topic.
 
Shared understanding has been a well-received idea among decision makers in Finland. This was observed in collaboration with the Prime Minister's Office of Finland, which soon adopted a light version of the term and started to use it in its own discussions and publications<ref>Dufva M, Halonen M, Kari M, Koivisto T, Koivisto R, Myllyoja J. Kohti jaettua ymmärrystä työn tulevaisuudesta [Toward a shared understanding of the future of work]. Helsinki: Prime Minister's Office: Publications of the Government's analysis, assessment and research activities 33; 2017. (in Finnish) http://tietokayttoon.fi/julkaisu?pubid=18301. Accessed 24 Jan 2018.</ref><ref>Oksanen K. Valtioneuvoston tulevaisuusselonteon 1. osa. Jaettu ymmärrys työn murroksesta [Government Report on the Future Part 1. A shared understanding of the transformation of work]. Prime Minister's Office Publications 13a; 2017. (in Finnish) http://urn.fi/URN:ISBN:978-952-287-432-0. Accessed 24 Jan 2018.</ref>. Sometimes it is better to aim at understanding rather than consensus, and this idea was well accepted.
 
One of the criteria for the development of open policy practice was that it should be versatile and usable for any expert or policy work. However, the implementation of the method can start gradually: for example, the Prime Minister's Office started to use useful concepts and tested online editing but was reluctant to commit deeply to open assessments. Data visualisations showed immediate utility and were seen as an important way to communicate expert knowledge. In contrast, the full open policy practice method contains many parts that are clearly against current practices and therefore need thorough testing and evaluation before they can be widely accepted and adopted. For example, the discussion about which parts of a law-making process could be opened to the public has only just started. The current default is that little is opened except some hearings that are required by law.
 
As mentioned before, accumulation of scientific merit is a key motivation, and established processes, journals, and scientific committees fulfil this purpose. To maximise societal impact, an expert should perhaps write for Wikipedia rather than a scientific specialist journal. However, such activity brings little merit. Indeed, only 7 % of people contributing to Wikipedia do it for professional reasons<ref>Pande M. Wikipedia editors do it for fun: First results of our 2011 editor survey. 2011. https://blog.wikimedia.org/2011/06/10/wikipedia-editors-do-it-for-fun-first-results-of-our-2011-editor-survey/. Accessed 24 Jan 2018.</ref>. Open assessments seem to suffer a similar problem as other non-established scientific endeavours.
 
Because presenting controversial and unpopular ideas is also a prerequisite for a complete shared understanding, it is very important that such activity is encouraged and respected. A community producing shared understanding should cherish such an attitude and promote contributions even if they are incompatible with the scientific or another paradigm. This is challenging both to someone presenting such opinions and to someone personally against the presented opinion. The situation requires that all parties have faith in the process and its capability to produce fair conclusions. Therefore, the acceptability of open, editable pages and of contributions to such pages should be promoted and motivated as a part of method development.
 
=== Efficiency ===
 
Resources needed to do a single assessment online are not that different from doing it offline. R is a powerful tool for modelling, and code can be shared with those who understand it. A spreadsheet model would be readable by a larger group, but its modelling, co-creation, and archiving functionalities are far behind those of R. Emphasis must be put on documentation to ensure the availability and usability of assessments if the model code itself is not understandable to end users.
 
We have put effort into developing reusable modules and data sources for environmental modelling. The work has been successful in producing building blocks that have been used in several assessments about e.g. fish consumption and have thus reduced the marginal cost of a new assessment about a related topic (see Table 8). However, the use of these modelling blocks has not been large outside THL, so there is much more potential for inter-assessment efficiency. Again, the main challenge is building a community for assessments and decision support that also includes people who can model, as illustrated by the sketch below.
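
The following base-R example is a sketch of what such a reusable building block can look like: a question and a formula bundled into a module that two different assessments call with their own data. It only mimics the ovariable idea described in Table S3-1; it is not the OpasnetUtils implementation, and the module name and all numbers are hypothetical placeholders.

<pre>
# A reusable building block: a question plus a formula that turns fish
# consumption and contaminant concentration into an intake estimate.
# Hypothetical sketch only; not the OpasnetUtils ovariable API.
intake_module <- list(
  question = "What is the daily dioxin intake from fish consumption?",
  formula  = function(consumption_g_per_day, concentration_pg_per_g) {
    consumption_g_per_day * concentration_pg_per_g   # intake in pg/day
  }
)

# Assessment A: high consumption of Baltic herring (illustrative numbers).
intake_A <- intake_module$formula(consumption_g_per_day = 30,
                                  concentration_pg_per_g = 5)

# Assessment B: a different population and species, reusing the same module.
intake_B <- intake_module$formula(consumption_g_per_day = 10,
                                  concentration_pg_per_g = 2)

c(assessment_A = intake_A, assessment_B = intake_B)
</pre>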
 
A major issue in spreading the practices to new users is that there is a lot to learn before the tools can be used. Even experts with substantive knowledge need time to familiarise themselves with the modelling language used unless they already know it. This has led to a situation where the lack of a common modelling language hampers cooperation of experts on a detailed level. The most common solution that we have seen is a strict division of tasks, specified data interfaces, and sequential work. In practice, one group produces data about e.g. emissions of a pollutant, another group takes that data and calculates exposures, and a third group estimates health impacts after the first two steps are finalised. Detailed understanding of and contributions to other groups' work remain low. On the other hand, most researchers are happy in their own niche and do not expect that other experts could or should learn the details of their work. Consequently, the need for shared tools is often considered low, as illustrated by the sketch below.
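
The sequential workflow described above can be illustrated with a small R sketch in which each group's output is an explicit function with a defined interface; all function names and coefficients are hypothetical. In a shared workspace the whole chain can be run, inspected, and criticised as one model instead of each group handing fixed result files to the next one.

<pre>
# Three groups' work as three functions with explicit data interfaces.
# All coefficients are placeholders for illustration only.
emissions <- function(activity_level) {
  activity_level * 0.8            # tonnes of pollutant per year
}

exposure <- function(emission_tonnes) {
  emission_tonnes * 0.05          # population-weighted concentration, ug/m3
}

health_impact <- function(concentration_ug_m3, population = 1e5) {
  concentration_ug_m3 * 1e-3 * population   # attributable cases per year
}

# In a shared model the whole chain is run and criticised in one place.
health_impact(exposure(emissions(activity_level = 100)))
</pre>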
 
For these reasons, facilitation is necessary to increase understanding across disciplines and the reuse of tools among experts. It is also necessary to improve the readability of the content, help non-experts understand complex issues, and keep discussions focused. All this increases the usability of open assessments and makes the next assessment easier.
 
So far, there are no data about how well archetypes and paradigms are able to summarise values and reasoning rules and thus make policy discussions more precise and efficient. The popularity of voting advice applications demonstrates that there is a societal need for value analysis and aggregation. Our hypothesis is that several archetypes describing common and important values in the population will increase in popularity and form something like virtual parties that represent the population well with respect to key values, while less important values will not rise in a similar way.
 
We also hypothesise that only a few major paradigms will emerge, namely those whose applicability is wide and independent of the discipline. The scientific paradigm is expected to be one of them, and it will be interesting to see what else emerges. People commonly reason against some unintuitive rules of the scientific method (e.g. they try to prove a hypothesis right rather than wrong), but it is not clear whether this will create a need to develop a paradigm for an alternative approach. It is not even clear whether people are willing to accept the idea that there could be different, competing rules for reasoning within a single assessment or decision support process.
 
== Discussion ==
 
The experience with open policy practice demonstrates that the method works as expected if the participants are committed to the methods, practices, and tools. However, there have been fewer participants in most open assessments than was hoped for, and the number of experts or decision makers who actively read from or contribute to the Opasnet website has remained lower than expected, although 90,000 pageviews per year is still a fair amount. This is partly due to insufficient marketing, as reader numbers have gone up clearly with assessments that have gained media coverage and public interest (e.g. the transport and communication strategy).
 
The principles behind open policy practice are not unique; on the contrary, they have mostly been borrowed from good practices of various disciplines. Many principles from the original collection<ref name="ora2007"/> have increased in popularity. Openness in science is a current megatrend, and its importance has been accepted much more widely than what was the case in 2006 when Opasnet was launched. Not surprisingly, there are several other websites and organisations that promote one or more of the same principles. Some of them are described here.
 
Open Science Framework is a project that aims to increase reproducibility in science by developing structured protocols for reproducing research studies, documenting study designs and results online, and producing open source software and preprint services to support this<ref>Open Science Framework. https://osf.io/. Accessed 24 Jan 2018.</ref>. The Framework maintains a web-workspace for documenting research as it unfolds rather than only afterwards in articles.
 
Omidyar Network is an organisation that gives grants to non-profit organisations and also invests in startups that promote e.g. governance and citizen engagement<ref>Omidyar Network. A world of positive returns. http://www.omidyar.com. Accessed 24 Jan 2018.</ref>. As an example, they support tools to improve online discussion with annotations<ref>Hypothesis. Annotate the web, with anyone, anywhere. https://web.hypothes.is/. Accessed 24 Jan 2018.</ref>, an objective similar to that of structured discussions.
 
ArXiv.org is a famous example of preprint servers offering a place for publishing and discussing manuscripts before peer review<ref>Cornell University Library. arXiv.org. https://arxiv.org/. Accessed 24 Jan 2018.</ref>. Such websites, as well as open access journals, have increased during recent years as the importance of the availability of scientific information has been understood. The use of open data storages (openscience.fi/ida) for research results is often required by research funders. Governments have also been active in opening data and statistics to wide use (data.gov.uk). Governance practices have been developed towards openness and inclusiveness, promoted by international initiatives such as the Open Government Partnership (www.opengovpartnership.org). In brief, facilities for openness and inclusiveness in science and governance are increasing rapidly, and at the current rate, the practices will change radically in the next ten years.
 
Non-research domains also show increasing openness. Shared online tools such as Google Drive (drive.google.com), Slack (slack.com), and others have familiarised people with online collaboration and the idea that information is accessible from anywhere. Open platforms for the deliberation of decisions are available (otakantaa.fi, kansalaisaloite.fi), and sharing of code is routinely done via large platforms (github.com, cran.r-project.org). The working environment has changed much faster than the practices in research and societal decision making.
 
As an extreme example, the successful hedge fund Bridgewater Associates implements radical openness and continuous criticism of all ideas presented by its workers rather than letting organisational status determine who is heard<ref>Dalio R. Principles: Life and work. New York: Simon & Schuster; 2017. ISBN 9781501124020</ref>. In a sense, they are implementing the scientific method in a much more rigorous way than is typically done in science. All in all, despite challenges in practical implementation, the principles found in open policy practice have gained popularity and produced success stories in many areas, and they are more timely than before.
 
Despite all this progress, the websites and tools mentioned above do not offer a place for open, topic-wise scientific information production and discussion that would also support decision making. This could be achieved by merging the functionalities of e.g. Opasnet, the Open Science Framework, open data repositories, and discussion forums. Even if different tasks happened on separate websites, they could form an integrated system to be used by decision makers, experts, stakeholders, and machines. The Resource Description Framework and ontologies could be helpful in organising such a complex system.
 
To keep expert and decision-making practices abreast of this progress, there is a need for tools and also for training designed to facilitate the change. New practices could also be promoted by developing ways to give merit and recognition more directly based on participation in co-creation online. The current publication counts and impact factors are very indirect measures of the societal or scientific importance of the information produced.
 
In this article, we have demonstrated methods and practices that have already been successfully used in decision support. However, there are many parts that have been considered important parts of open policy practice but have not yet been extensively tested. There is still a lot to learn about using co-created information in decision making. However, experiences so far have demonstrated that decision making can be more evidence-based than it is today, and several tools promoting this change are available to us. This is expected to reduce the influence of a single leader or decision maker, resulting in more stable and predictable policies.
 
== Conclusions ==
 
In conclusion, we have demonstrated that open policy practice works technically as expected. Open assessments can be performed openly online. They do not fail for the reasons many people expect, namely low-quality contributions, malevolent attacks, or chaos caused by too many uninformed participants; these phenomena are very rare. Shared understanding has proved to be a useful concept that guides policy processes toward a more collaborative approach, whose purpose is wider understanding rather than winning.
 
However, open policy practice has not been adopted in expert work or decision support as widely as expected. A key hindrance has been that it offers little added efficiency or quality of content for a single task by an expert or a decision maker, although its impacts on the overall process are positive. The increased availability, acceptability, and inter-assessment efficiency have not been recognised by the scientific or policy community.
 
Active facilitation, community building and improving the user-friendliness of the tools were identified as key solutions in improving usability of the method in the future.
 
== List of abbreviations ==
 
* THL: National Institute for Health and Welfare (government research institute in Finland)
* IEHIAS: Integrated Environmental Health Impact Assessment System (a website)
 
== Declarations ==
 
*    Ethics approval and consent to participate: Not applicable
*    Consent for publication: Not applicable
*    Availability of data and materials: The datasets generated and/or analysed during the current study are available in the [NAME] repository, [PERSISTENT WEB LINK TO DATASETS] OPASNET AND ANOTHER
*    Competing interests: The authors declare that they have no competing interests.
*    Funding: This work resulted from the BONUS GOHERR project (Integrated governance of Baltic herring and salmon stocks involving stakeholders, 2015-2018) that was supported by BONUS (Art 185), funded jointly by the EU, the Academy of Finland and the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning. Previous funders of the work: Centre of Excellence for Environmental Risk Analysis 2002-2007 (Academy of Finland), Beneris 2006-2009 (EU FP6 Food-CT-2006-022936), Intarese 2005-2011 (EU FP6 Integrated project in Global Change and Ecosystems, project number 018385), Heimtsa 2007-2011 EU FP6 (Global Change and Ecosystems project number GOCE-CT-2006-036913-2), Plantlibra 2010-2014 (EU FP7-KBBE-2009-3 project 245199), Urgenche 2011-2014 (EU FP7 Call FP7-ENV-2010 Project ID 265114), Finmerac 2006-2008 (Finnish Funding Agency for Innovation TEKES), Minera 2010-2013 (European Regional Development Fund), Scud 2005-2010 (Academy of Finland, grant 108571), Bioher 2008-2011 (Academy of Finland, grant 124306), Claih 2009-2012 (Academy of Finland, grant 129341), Yhtäköyttä 2015-2016 (Prime Minister's Office, Finland).
*    Authors' contributions: JT and MP jointly developed the open assessment method and open policy practice. JT launched the Opasnet web-workspace and supervised its development. TR developed the OpasnetUtils software package from an original idea by JT and implemented several assessment models. All authors participated in several assessments and discussions about methods. JT wrote the first manuscript draft based on materials from MP, PM, AA, and TR. All authors read and approved the final manuscript.
*    Acknowledgements: We thank Einari Happonen and Juha Villman for their work in developing Opasnet; and John S. Evans, Alexanda Gens, Patrycja Gradowska, Päivi Haapasaari, Sonja-Maria Ignatius, Suvi Ignatius, Matti Jantunen, Anne Knol, Sami Majaniemi, Kaisa Mäkelä, Raimo Muurinen, Jussi Nissilä, Juha Pekkanen, Mia Pihlajamäki, Teemu Ropponen, Simo Sarkki, Marko Tainio, Peter Tattersall, Jouko Tuomisto, and Matleena Tuomisto for crucial and inspiring discussions about methods and their implementation, and promoting these ideas on several forums.
 
== Endnotes ==
 
'''<sup>a</sup>''' This paper has its foundations in environmental health, but the idea of decision support necessarily looks at aspects seen as relevant from the point of view of the decision maker, not from that of an expert in a particular field. Therefore, this article and also the method described deliberately take a wide view and cover all areas of expertise. However, all practical case studies have their main expertise needs in public health, and often specifically in environmental health. '''<sup>b</sup>''' Whenever this article presents a term in italics (e.g. ''open assessment''), it indicates that there is a page on the Opasnet web-workspace describing that term and that it can be accessed using a respective link (e.g. http://en.opasnet.org/w/Open_assessment). '''<sup>c</sup>''' Extended causal diagram was originally called ''pyrkilo''; the word was invented in 1997. It is Finnish and a free translation is "an object or process that tends to produce or aims at producing certain kinds of products." The reasoning for using the word was that pyrkilo diagrams tend to improve understanding and thus decisions. The first wiki website was also called Pyrkilo, but the name was soon changed to Opasnet. '''<sup>d</sup>''' The database consists of two parts: MongoDB contains the actual data, and a related MySQL database contains metadata about all tables. The first version of the database used only MySQL, but it was not optimal for data with no predefined structure.
 
== References and notes ==
 
<references/>
 
== Figures and tables ==
 
Move them here for submission.
 
== Appendix 1: Objects and ontologies ==
 
<gallery widths=400 heights=350>
File:Why dioxin is a problem.png|Figure A1-1. Extended causal diagram about dioxins in Baltic fish. The focus is on reasoning and value judgements and their connections to causal chains about dioxins and health.
File:Legend for extended causal diagrams.png|Figure A1-2. Legend for different objects used in extended causal diagrams.
</gallery>
:''From [[Universal object]]''
 
{{attack|#|The definitions need to be revised. Also the other objects.|--[[User:Jouni|Jouni]] ([[User talk:Jouni|talk]]) 15:11, 8 January 2018 (UTC)}}
 
{| {{prettytable}}
|+'''Table S3-2. Relation types used in open policy ontology.
|----
! Class|| English name|| Finnish name|| English inverse|| Finnish inverse|| Description
|----
|| relation|| participatory link|| osallisuuslinkki|| || || The subject is a stakeholder that has a particular role related to an object
|----
|| relation|| operational link|| toimintolinkki|| || || The subject has some kind of practical relation to the object (a fairly wide class)
|----
|| relation|| evaluative link|| arvostuslinkki|| || || The subject shows preference or relevance about the object
|----
|| relation|| referential link|| viitelinkki|| || || The subject is used as a reference of a kind for the object
|----
|| relation|| argumentative link|| argumentaatiolinkki|| || || The subject is used as an argument to criticise the object.
|----
|| relation|| causal link|| syylinkki|| || || The subject has causal effect on the object (or vice versa in the case of an inverse relation)
|----
|| relation|| property link|| ominaisuuslinkki|| || || The object describes a defined property of the subject.
|----
|| causal link|| negative causal link|| negatiivinen syylinkki|| || || The subject reduces or diminishes the object.
|----
|| causal link|| positive causal link|| positiivinen syylinkki|| || || The subject increases or enhances the object.
|----
|| negative causal link|| decreases|| vähentää|| is decreased by|| vähentyy||
|----
|| positive causal link|| increases|| lisää|| is increased by|| lisääntyy||
|----
|| negative causal link|| worsens|| huonontaa|| is worsened by|| huonontuu||
|----
|| positive causal link|| improves|| parantaa|| is improved by|| parantuu||
|----
|| negative causal link|| prevents|| estää|| is prevented by|| estyy||
|----
|| positive causal link|| enhances|| edistää|| is enhanced by|| edistyy||
|----
|| negative causal link|| impairs|| heikentää|| is impaired by|| heikentyy||
|----
|| positive causal link|| sustains|| ylläpitää|| is sustained by|| ylläpitäytyy||
|----
|| causal link|| affects|| vaikuttaa|| is affected by|| vaikuttuu||
|----
|| causal link|| indirectly affects|| vaikuttaa epäsuorasti|| indirectly affected by|| vaikuttuu epäsuorasti||
|----
|| causal link|| cause of|| syy|| caused by|| johtuu|| Wikidata property P1542
|----
|| causal link|| immediate cause of|| välitön syy|| immediately caused by|| johtuu välittömästi|| Wikidata property P1536
|----
|| causal link|| contributing factor of|| vaikuttava tekijä|| || || Wikidata property P1537
|----
|| participatory link|| performs|| toteuttaa|| performer|| toteuttajana|| who does a task?
|----
|| participatory link|| decides|| päättää|| decider|| päätäjänä||
|----
|| participatory link|| asks|| kysyy|| asker|| kysyjänä||
|----
|| participatory link|| participates|| osallistuu|| participant|| osallistujana||
|----
|| participatory link|| accepts|| hyväksyy|| accepted by|| hyväksyjänä||
|----
|| participatory link|| develops|| kehittää|| developed by|| kehittäjänä||
|----
|| participatory link|| proposes|| ehdottaa|| proposed by|| ehdottajana||
|----
|| participatory link|| answers|| vastaa|| answered by|| vastaajana||
|----
|| participatory link|| responsible for|| vastuussa|| responsibility of|| vastuullisena||
|----
|| participatory link|| negotiates|| neuvottelee|| negotiated by|| neuvottelijana||
|----
|| participatory link|| recommends|| suosittelee|| recommended by|| suosittelijana||
|----
|| participatory link|| controls|| kontrolloi|| controlled by|| kontrolloijana||
|----
|| participatory link|| claims|| väittää|| claimed by|| väittäjänä||
|----
|| participatory link|| owns|| omistaa|| owned by|| omistajana||
|----
|| participatory link|| does|| tekee|| done by|| tekijänä||
|----
|| participatory link|| maintains|| ylläpitää|| maintained by|| ylläpitäjänä||
|----
|| participatory link|| oversees|| valvoo|| overseen by|| valvojana||
|----
|| operational link|| has option|| omistaa vaihtoehdon|| option for|| vaihtoehtona||
|----
|| operational link|| has index|| omistaa indeksin|| index for|| indeksinä||
|----
|| operational link|| tells|| kertoo|| told by|| kertojana||
|----
|| operational link|| describes|| kuvaa|| described by|| kuvaajana||
|----
|| operational link|| maps|| kartoittaa|| mapped by|| kartjoittajana||
|----
|| operational link|| contains data|| sisältää dataa|| data contained in|| data sisältyy||
|----
|| operational link|| data for|| on datana|| gets data from|| saa datansa||
|----
|----
!Object !! Topical information areas !! Description
|| operational link|| uses|| käyttää|| is used by|| on käytettävänä|| an input (object) for a process (subject)
|----
|----
| Assessment
|| operational link|| produces|| tuottaa|| is produced by|| tuottajana|| Object is an output of a process produced by a stakeholder (subject)
| Tasks
| Assessment is a process for describing a particular piece of reality in aim to fulfil a certain information need in a decision-making situation. The word assessment can also mean the end product of this process, i.e. some kind of assessment report. Often it is clear from the context whether assessment means the doing of the report or the report itself. Methodologically, these are two different objects, called the assessment process and the assessment product, respectively. Unlike other universal objects, assessments are discrete objects having defined starting and ending points in time and specific contextually and situationally defined goals. Decisions included in an assessment are described within the assessment, and they are no longer described as variables. In [[R]], there was previously an S4 object called oassessment, but that is rarely if at all used nowadays.
|----
|----
| Variable
|| operational link|| provides|| varustaa|| is provided by|| varustajana||
| Substance
| Variable is a description of a particular piece of reality. It can be a description of physical phenomena, or a description of value judgments. Variables are continuously existing descriptions of reality, which develop in time as knowledge about them increases. Variables are therefore not tied into any single assessment, but instead can be included in other assessments. Variable is the basic building block of describing reality. In [[R]], variables are implemented using an S4 object called ovariable.
|----
|----
| Method
|| operational link|| about|| aiheesta|| || || a task is about a topic. This overlaps with has topic; merge them?
| Methods
| Method is a systematic procedure for a particular information manipulation [[process]] that is needed as a part of an [[assessment]] work. Method is the basic building block for describing the assessment work (not reality, like the other [[universal object]]s). In practice, methods are "how-to-do" descriptions about how information should be produced, collected, analysed, or synthesised in an [[assessment]]. Some methods can be about managing other methods. Typically, [[method]]s contain a software code or another algorithm to actually perform the method easily. Previously, there was a subcategory of method called tool, but the difference was not clear and the use of tool is depreciated as a formal object. In [[R]], methods are typically ovariables that contain dependencies and formulas for computing the result, but some context-specific information about dependencies are missing. Therefore, the result cannot be computed until the method is used within an assessment.
|----
|----
| Study
|| property link|| logical link|| looginen linkki|| || || Relations based on logic
| Substance
| Study is an information object that describes a research study and its [[answer]]s, i.e. observational or other data obtained. The study methods are described as the [[rationale]] of the object. Unlike traditional research articles, there is little or no discussion, because the interpretation of the results happens in other [[object]]s, typically in [[variable]]s for which the study contains useful information. A major difference to a [[variable]] is that the [[rationale]] of the [[study]] is fixed after the research plan has been fixed and work done, and also the [[answer]] including the study results is fixed after the data has been obtained and processed. The [[question]] (or scope) of a study reflects the generalisability of the study results, and it is open to discussion and subject to change also after the study has been finished. In contrast, in a variable the [[question]] is typically fixed, and the [[answer]] and [[rationale]] change as new information comes up.
|----
|----
| Lecture
|| property link|| set theory link|| joukko-oppilinkki|| || || Relations based on set theory
| Substance
| Lecture contains a piece of information that is to be mediated to a defined audience and with a defined learning objective. It can also be seen as a process during which the audience learns, instead of being a passive recipient of information.
|----
|----
| Encyclopedia article
|| set theory link|| part of|| osana|| has part|| sisältää osan|| is a part of a bigger entity, e.g. Venus is part of Solar System. Wikidata property P361 (part of) &amp; P527 (has part). Previously there were relations about a decision: substance of, decision process of, stakeholder of, method of, task of, irrelevant to. But these are depreciated and replaced by has part, because the class of the object makes specific relations redundant.
| Substance
| Encyclopedia articles are objects that do not attempt to answer a specific research question. Instead, they are general descriptions about a topic. They do not have a universal attribute structure.
|----
|----
| Nugget
|| set theory link|| context for|| kontekstina|| has context|| omistaa kontekstin||
| Substance
| Nugget is an object that was originally designed to be written by a dedicated (group of) person(s). Nuggets are not freely editable by others. Also, they do not have a universal structure.
|----
|----
| || Decision process ||  
|| set theory link|| has subclass|| omistaa alajoukon|| subclass of|| alajoukkona|| Wikidata property P279
|----
|----
| [[Discussion]] || Any ||
|| set theory link|| has instance|| omistaa instanssin|| instance of|| instanssina|| Object belongs to a set defined by the subject and inherits the properties of the set. Sysnonym for has item, which is depreciated. Wikidata property P31
|----
|----
| User pages || Stakeholders ||
|| logical link|| opposite|| vastakohta|| || || subject is opposite of object, e.g. black is opposite of white. Wikidata property P461; it is its own inverse
|----
|----
| || Irrelevant issues ||
|| logical link|| inverse|| toisinpäin|| || || a sentence is equal to another sentence where subject and object switch places and has the inverse relation. This is typically needed in preprocessing of insight networks, and it rarely is explicitly shown of graphs. Wikidata property P1696; it is its own inverse
|}
|----
 
|| logical link|| if - then|| jos - niin|| if not - then not|| jos ei - niin ei|| If subject is true, then object is true. Also the negation is possible: if - then not. This links to logical operators and, or, not, equal, exists, for all; but it is not clear how they should be used in an insight network.
{| {{prettytable}}
|----
|+ '''Vocabulary for open policy practice.
|| operational link|| prepares|| valmistelee|| prepared by|| valmistelijana||
|----
|| operational link|| pays|| kustantaa|| paid by|| kustantajana||
|----
|| operational link|| rationale for|| perustelee|| has rationale|| perusteltuu||
|----
|| operational link|| offers|| tarjoaa|| offered by|| tarjoajana||
|----
|| operational link|| executes|| suorittaa|| executed by|| suorittajana||
|----
|| operational link|| irrelevant to|| epärelevantti asiassa|| || || If there is no identified relation (or chain of relations) between a subject and an object, it implies that the subject is irrelevant to the object. However, sometimes people may (falsely) think that it is relevant, and this relation is used to explicate the irrelevance.
|----
|| evaluative link|| finds important|| kokee tärkeäksi|| is found important|| tärkeäksi kokijana||  
|----
|| evaluative link|| makes relevant|| tekee relevantiksi|| is made relevant|| relevantiksi tekijänä|| if the subject is valid in the given context, then the object is relevant. This typically goes between arguments, from a variable to value statement or from a value statement to a fact statement. This is a synonym of 'valid defend of type relevance'.
|----
|| evaluative link|| makes irrelevant|| tekee epärelevantiksi|| is made irrelevant|| epärelevantiksi tekijänä|| Opposite of 'makes relevant'. Synonym of 'valid attack of type relevance'.
|----
|----
! Label|| Description|| Properties or research question
|| evaluative link|| makes redundant|| tekee turhaksi|| is made redundant|| turhaksi tekijänä|| Everything that is said in the object is already said in the subject. This depreciates the object because it brings no added value. However, it is kept for archival reasons and to demonstrate that the statement was heard.
|----
|----
|| open policy practice|| framework for planning, making, and implementing decisions||  
|| evaluative link|| has opinion|| on mieltä|| || || Subject (typically a stakeholder) supports the object (typically a value or fact statement). This is preferred over 'values' and 'finds important' because it is more generic without loss of meaning.
|----
|----
|| open assessment|| method, information object|| How can information be organised for support societal decision making when open participation is allowed?
|| evaluative link|| values|| arvostaa|| valued by|| arvostajana|| A stakeholder (subject) gives value or finds an object important. Object may be a topic or statement. Depreciated, use 'has opinion' instead.
|----
|----
|| variable|| information object||  
|| evaluative link|| has truthlikeness|| on totuudellinen|| || || A subjective probability that subject is true. Object is a numeric value between 0 and 1. Typically this has a qualifier 'according to X' where X is the person or archetype who has assigned the probability.
|----
|| evaluative link|| has preference|| mieltymys|| preference of|| mieltymyksenä|| Subject is better than object in a moral sense.
|----
|| evaluative link|| has popularity|| on suosiossa|| || || A measure based on likes given by users.
|----
|| evaluative link|| has objective|| omaa tavoitteen|| objective of|| tavoitteena||  
|----
|| argumentative link|| agrees|| samaa mieltä|| || ||  
|----
|| argumentative link|| disagrees|| eri mieltä|| || ||  
|----
|| argumentative link|| comments|| kommentoi|| commented by|| kommentoijana||  
|----
|| argumentative link|| defends|| puolustaa|| defended by|| puolustajana||  
|----
|| argumentative link|| attacks|| hyökkää|| attacked by|| hyökkääjänä||  
|----
|| argumentative link|| relevant argument|| relevantti argumentti|| || || Argument is relevant in its context.
|----
|| argumentative link|| irrelevant argument|| epärelevantti argumentti|| || || Argument is irrelevant in its context.
|----
|| argumentative link|| joke about|| vitsi aiheesta|| provokes joke|| kirvoittaa vitsin|| This relation is used to describe that the subject should not be taken as information, even though it may be relevant. Jokes are allowed because they may help in creating new ideas and perspectives to an issue.
|----
|| referential link|| topic of|| aiheena|| has topic|| aiheesta|| This is used when the object is a publication and the subject is a (broad) topic rather than a statement. In such situations, it is not meaningful to back up the subject with references. Useful in describing the contents of a publication, or identifying relevant literature for a topic.
|----
|| referential link|| discussed in|| kerrotaan|| discusses|| kertoo||  
|----
|| referential link|| reference for|| viitteenä|| has reference|| viite|| Subject is a reference that backs up statements presented in the object. Used in the same way as references in scientific literature are used.
|----
|| referential link|| states|| väittää|| stated in|| väitetään kohteessa|| Describes the source of a statement; may also refer to a person.
|----
|| referential link|| tag for|| täginä|| has tag|| omistaa tägin|| Subject is a keyword, type, or class for object. Used in classifications.
|----
|| referential link|| category for|| kategoriana|| has category|| kuuluu kategoriaan||  
|----
|| referential link|| associates with|| liittyy|| || || Subject is associated with object in some undefined way. This is a weak relation and does not affect the outcomes of inferences, but it may be useful to remind users that an association exists and it should be clarified more precisely. This is its own inverse.
|----
|| referential link|| answers question|| vastaa kysymykseen|| has answer|| vastaus|| Used between a statement (answer) and a topic (question). In knowledge crystals, the relation is embedded in the object structure.
|----
|| irrelevant argument|| irrelevant comment|| epärelevantti kommentti|| || || Inverses are not needed, because the relation is always tied with an argument (the subject).
|----
|| irrelevant argument|| irrelevant attack|| epärelevantti hyökkäys|| || ||  
|----
|| irrelevant argument|| irrelevant defense|| epärelevantti puolustus|| || ||  
|----
|| relevant argument|| relevant comment|| relevantti kommentti|| || ||  
|----
|| relevant argument|| relevant attack|| relevantti hyökkäys|| || ||  
|----
|| relevant argument|| relevant defense|| relevantti puolustus|| || ||  
|----
|| property link|| evaluative property|| arviointiominaisuus|| || || characteristic of a product or work that tells whether it is fit for its purpose. Especially used for assessments and assessment work.
|----
|| evaluative property|| property of decision support|| päätöstuen ominaisuus|| || || What makes an assessment or decision support process fit for its purpose?
|----
|| evaluative property|| setting of assessment|| arvioinnin kattavuus|| || || See Table 5.
|----
|| setting of assessment|| impacts|| vaikutukset|| || ||  
|----
|| setting of assessment|| causes|| syyt|| || ||
|----
|| setting of assessment|| problem owner|| asianomistaja|| || ||
|----
|| setting of assessment|| target users|| kohderyhmä|| || ||  
|----
|| setting of assessment|| interaction|| vuorovaikutus|| || ||
|----
|| interaction|| dimension of openness|| avoimuuden ulottuvuus|| || || See Table 6.
|----
|| dimension of openness|| scope of participation|| osallistumisen avoimuus|| || ||
|----
|| dimension of openness|| access to information|| tiedon avoimuus|| || ||  
|----
|| dimension of openness|| timing of openness|| osallistumisen ajoitus|| || ||  
|----
|| dimension of openness|| scope of contribution|| osallistumisen kattavuus|| || ||  
|----
|| dimension of openness|| impact of contribution|| osallistumisen vaikutus|| || ||  
|----
|| interaction|| category of interaction|| vuorovaikutuksen luokka|| || || See Table 2. How does assessment interact with the intended use of its results? Possible values: isolated (eristetty), informing (tiedottava), participatory (osallistava), joint (yhteistyöhakuinen), shared (jaettu).
|----
|| property of decision support|| quality of content|| sisällön laatu|| || || See Table 4.
|----
|| quality of content|| informativeness|| tarkkuus|| || ||  
|----
|| quality of content|| calibration|| harhattomuus|| || ||  
|----
|| quality of content|| coherence|| sisäinen yhdenmukaisuus|| || ||  
|----
|| property of decision support|| applicability|| sovellettavuus|| || ||  
|----
|| applicability|| relevance|| merkityksellisyys|| || ||  
|----
|| applicability|| availability|| saatavuus|| || ||  
|----
|| applicability|| usability|| käytettävyys|| || ||  
|----
|| applicability|| acceptability|| hyväksyttävyys|| || ||  
|----
|| property of decision support|| efficiency|| tehokkuus|| || ||  
|----
|| efficiency|| intra-assessment efficiency|| sisäinen tehokkuus|| || ||  
|----
|| efficiency|| inter-assessment efficiency|| ulkoinen tehokkuus|| || ||  
|}
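To illustrate how such a vocabulary is used, the sketch below (hypothetical statements and layout, not the actual Opasnet data model) writes a few insight-network statements as subject–relation–object triples in R:

 # Each row is one link of an insight network: subject --relation--> object.
 # The relations come from the vocabulary above; the statements are invented examples.
 triples <- data.frame(
   subject  = c("fish species", "stakeholder A", "Smith et al. 2019"),
   relation = c("has instance", "has opinion", "reference for"),
   object   = c("Baltic herring", "dioxin limits are too strict", "exposure estimate")
 )
 triples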


== Appendix S4: Workspace tools: OpasnetUtils package and Opasnet Base ==
 
=== Ovariable ===


Ovariable is an object class that is used in R to operationalise knowledge crystals. In essence, impact assessment models are built using ovariables as the main tool to organise, analyse, and synthesise data and causal relations between items. The purpose of ovariables is to offer a standardised, generalised, and modular solution to modelling. Standardised means that all ovariables have the same overall structure, which makes it possible to develop generalised functions and processes to manipulate them. The modular structure of a model makes it possible to change pieces within the model without breaking the overall structure or functionality. For example, it is possible to take an existing health impact model, replace the ovariable that estimates the exposure of the target population with a new one, and produce results that are otherwise comparable to the previous results but differ based on exposure.


What is the structure of an ovariable such that
* it is able to implement different [[scenario]]s?


An ovariable contains the current best answer in a machine-readable format (including uncertainties when relevant) to the question asked by the respective knowledge crystal. In addition, it contains the information needed to derive the current best answer. The respective knowledge crystal typically has its own page at Opasnet, and the code to produce the ovariable is located on that page under the subheading Calculations.
 


It is useful to clarify terms here. ''Answer'' is the overall answer to the question asked (including an evaluated ovariable), and it is the reason for producing the knowledge crystal page in the first place. Answer is typically located near the top of the page to emphasise its importance. An answer may contain text, tables, or graphs on the web page. It typically also contains an R code for evaluating the respective ovariable. ''Output'' is the key part (technically a slot) of the answer within an ovariable and contains the details of what the reader wants to know about the answer. All other parts of the ovariable are needed to produce the output or understand its meaning. Finally, ''Result'' is the key column of the Output table (technically a data frame) and contains the actual numerical values for the answer.  


'''Slots


The ovariable is a class S4 object defined by OpasnetUtils in R software system. An ovariable has the following separate ''slots'' that can be accessed using X@slot (where X is the name of the ovariable):
 


;@name
*Name of <self> (the ovariable object) is useful since R's S4 classes do not support self-reference. It is used to identify relevant data structures as well as to set up hooks for modifiers such as scenario adjustments.


;@output
* The current best answer to the question asked.
* A single data frame (a 2D table type in R)
* Not defined until <self> is evaluated.
* Possible types of columns:
** ''Result'' is the column that contains the actual values of the answer to the question of the respective knowledge crystal. There is always a result column, but its name may vary; it is of type <self>Result.
** ''Indices'' are columns that define or restrict the Result in some way. For example, the Result can be given separately for males and females, and this is expressed by an index column ''Sex'', which contains locations ''Male'' and ''Female''. So, the Result contains (at least) one row for males and one for females. If there are several indices, the number of rows is typically the product of numbers of locations in each index. Consequently, the output may become very large with several indices.
** ''Iter'' is a special kind of index used in Monte Carlo simulations. Iter contains the number of the iteration. In Monte Carlo, the model is typically run 1000 or 10000 times.
** ''Unit'' contains the unit of the Result. It may be the same for all rows, but it may also vary from one row to another. Unit is not an index.
** Other, non-index columns can exist. Typically, they contain information that was used for some purpose during the evolution of the ovariable, but they may be unimportant in the current ovariable if they have been inherited from parent ovariables. Due to these other columns, the output may sometimes be rather wide.
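As a purely illustrative sketch (hypothetical numbers and column names, not taken from any actual assessment), the output of a simple exposure ovariable could look like this:

 # Toy example of an ovariable output: one index (Sex), Monte Carlo iterations
 # (Iter), a Unit column and the Result column. Values are invented.
 output <- data.frame(
   Sex    = rep(c("Male", "Female"), each = 3),
   Iter   = rep(1:3, times = 2),
   Unit   = "ug/day",
   Result = c(2.1, 1.8, 2.4, 1.5, 1.6, 1.4)
 )
 output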


;@data
* A single data frame that defines <self> as such.
* ''data'' slot contains data about direct measurements or estimates of the output. Typically, when data is used, the output can be directly derived from the information given, with possibly some manipulations such as dropping out unnecessary rows or interpreting given ranges or textual expressions as probability distributions.
* Probability distributions are interpreted by ''OpasnetUtils/Interpret''.  


;@marginal
*A logical vector that indicates full marginal indices (and not parts of joint distributions, result columns, or units or other row-specific descriptions) of output.


;@formula
* A function that defines <self> using objects from dependencies as inputs.  
* Returns either a data frame or an ovariable, which is then used as the output of the ovariable.
* Formula and dependencies slots are always used together. They estimate the answer indirectly in cases when there is knowledge about how this variable depends on the results of other variables (called parents). The slot dependencies is a table of parent variables and their identifiers, and formula is a function that takes the outputs of those parents, applies the defined code to them, and in this way produces the output for this variable.


;@dependencies
* A data frame that contains names and tokens or identifiers for model runs of variables required for <self> evaluation (list of causal parents). The following columns may be used:
** Name: name of an ovariable or a constant found in the global environment (.GlobalEnv).
** Key: the run key (typically a 16-character alphanumeric string) of a model run that is stored to Opasnet server. Key to be used in objects.get() function to fetch the dependent object.
** Ident: Page identifier and rcode name to be used in objects.latest() function where the newest run contains the dependent object. Syntax: "Op_en6007/answer".
** Also other columns are allowed (e.g. Description), and they may contain additional information about parents.
* Dependencies is a way of enabling references in ovariables by using function OpasnetUtils/ComputeDependencies. It creates variables in .GlobalEnv environment so that they are available to expressions in formula.
* Dependent ovariables are fetched and evaluated (only once by default) upon <self> evaluation.
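The following minimal sketch shows how the data, dependencies, and formula slots work together. It is an assumption-laden illustration (hypothetical variable names and values; argument names follow the OpasnetUtils documentation but may differ between package versions), not code from the article:

 library(OpasnetUtils)
 # Parent ovariables defined directly from data (hypothetical values).
 exposure <- Ovariable("exposure",
   data = data.frame(Sex = c("Male", "Female"), Result = c(2.1, 1.6)))
 erf <- Ovariable("erf", data = data.frame(Result = 0.03))
 # Child ovariable: its output is computed from the parents listed in dependencies.
 cases <- Ovariable("cases",
   dependencies = data.frame(Name = c("exposure", "erf")),
   formula = function(...) {
     exposure * erf  # parents are found by name when the formula is evaluated
   })
 cases <- EvalOutput(cases)  # evaluates exposure and erf first, then cases
 cases@output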


;@ddata
* A string containing an Opasnet identifier e.g. "Op_en1000". May also contain a subset specification e.g. "Op_en1000/dataset".
* This identifier is used to download data from the Opasnet database for the data slot (by default, only if empty) upon <self> evaluation.  
* By default, the data defined by ddata is downloaded when an ovariable is created. However, it is also possible to create and save an ovariable in such a way that the data is downloaded only when the ovariable is evaluated.
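As a hypothetical sketch of using the ddata slot (the page identifier is the placeholder used above; the argument name and download behaviour should be checked against the OpasnetUtils version in use), an ovariable can point to a data table stored in Opasnet Base:

 # The data slot is left empty and filled from the Opasnet page when needed.
 conc <- Ovariable("conc", ddata = "Op_en1000/dataset")
 conc <- EvalOutput(conc)  # downloads the data and derives the output from it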


;@meta
* A list of descriptive information about the object. Typical information includes the date created, the username of the creator, the page identifier of the Opasnet page with the ovariable code, and the identifier of the model run where the object was created.
* Other meta information can be added manually.


=== OpasnetUtils and operations with ovariables ===


OpasnetUtils is an R package found in CRAN repository (cran.r-project.org). It contains tools for open assessment and modelling at Opasnet, especially for utilising ovariables as modelled representations of knowledge crystals. Typically, ovariables are defined at Opasnet pages, and their data and evaluated output are stored to Opasnet server. There are also special user interface tools to enable user inputs before an R code is run on an Opasnet page; for further instructions, see http://en.opasnet.org/w/R-tools. However, ovariables can be used independently for building modular assessment models without any connection to Opasnet.


The example code shows some of the most important functionalities. Each operation is followed by an explanatory comment after # character.


 install.packages("OpasnetUtils") # Install the package OpasnetUtils. This is done only once per computer.
 library(OpasnetUtils) # Open the package. This is done once per R session.
 objects.latest("Op_en4004", code_name="conc_mehg") # Fetch ovariables stored by code conc_mehg at Opasnet page Mercury concentrations in fish in Finland (with identifier Op_en4004)
 conc_mehg <- EvalOutput(conc_mehg) # Evaluate the output of ovariable conc_mehg (methyl mercury concentrations in fish) that was just fetched.
 dat <- opbase.data("Op_en4004", subset="Kerty database") # Download data from the Kerty database on the same page and store it in the data frame dat
 a <- Ovariable("a", data=data.frame(Fish=c("Herring","Salmon"), Result=c(1,3))) # Define an ovariable for scaling salmon results by a factor of 3.
 mehg_scaled <- conc_mehg * a # Multiply methyl mercury concentrations by the scaling factor.


An ovariable is well defined when there is enough data, code or links to evaluate the output. Ovariables often have upstream dependencies whose output affect the output of the ovariable at hand. Therefore, ovariables are usually stored in a well defined but unevaluated format (i.e. without output). This makes it possible to use the same ovariable in different contexts, and the output varies depending on the upstream dependencies. On the other hand, it is possible to store all evaluated ovariables of a whole assessment model. This makes it possible to archive all details of a certain model version for future scrutiny.


Ovariables have efficient index handling, which makes it possible to perform arithmetic operations such as sums and products in a very simple way with ovariables. The basic idea is that if the outputs of two ovariables have two columns by the same name, they are automatically merged (or joined, using the SQL vocabulary) so that rows are merged if and only if they have the same location values in those two columns. The same principle applies to all pairs of columns by the same name. After the merge, the arithmetic operation is performed, row by row, on the Result columns of each ovariable. This results in an intuitive handling of outputs using short and straightforward code.
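The following plain-R sketch illustrates the merge-then-operate idea with ordinary data frames (it is not the OpasnetUtils implementation, only a demonstration of the principle with hypothetical values):

 # Two toy outputs that share the index column Fish.
 conc    <- data.frame(Fish = c("Herring", "Salmon"), Result = c(0.05, 0.10))
 scaling <- data.frame(Fish = c("Herring", "Salmon"), Result = c(1, 3))
 # Rows are joined by the shared column, then the operation is applied row by row.
 merged <- merge(conc, scaling, by = "Fish", suffixes = c(".conc", ".scaling"))
 merged$Result <- merged$Result.conc * merged$Result.scaling
 merged[, c("Fish", "Result")]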


Recursion is another important property of ovariables. When an ovariable is evaluated, the code checks whether it has upstream dependencies. If it does, those ovariables are fetched and evaluated first, and the dependencies of those ovariables are in turn fetched recursively, until all dependencies have been evaluated. Case-specific adjustments can be made to this recursion by fetching some upstream ovariables before the first ovariable is evaluated; if an upstream ovariable already exists in the global environment, the existing object is used and the respective stored object is not fetched (dependencies are only fetched if they do not already exist; this avoids unnecessary computation).


'''Decisions and other upstream commands


The general idea of ovariables is such that their code should not be modified to match a specific model but rather define the knowledge crystal in question as extensively as possible under its scope. In other words, it should answer its question in a reusable way so that the question and answer would be useful in many different situations. (Of course, this should be kept in mind already when the question is defined.) To match the scope of specific models, ovariables can be modified without changing the ovariable code by supplying commands upstream. A typical decision command is to make a new decision index with two scenarios, "business as usual" and "policy", use the original ovariable result for business as usual, and adjust the result for the policy, e.g. by adding or multiplying a constant that reflects the impact of the policy on the ovariable. Such adjustments can be done on the assessment level without a need to change the ovariable definition in any way.
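As a hand-made illustration of this idea (hypothetical values; OpasnetUtils also offers dedicated decision functions that are not shown here), a decision index with two scenarios can be added to an output and the policy scenario adjusted by a constant:

 # Hypothetical evaluated output of an upstream ovariable.
 out    <- data.frame(Sex = c("Male", "Female"), Result = c(2.1, 1.6))
 bau    <- cbind(out, Scenario = "business as usual")
 policy <- cbind(out, Scenario = "policy")
 policy$Result <- policy$Result * 0.8  # assume the policy reduces the value by 20 %
 rbind(bau, policy)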


Evaluating a latent ovariable first triggers the evaluation of its unevaluated parent ovariables (listed in dependencies), since their results are needed to evaluate the child. This chain of evaluation calls forms a recursion tree in which each upstream variable is evaluated exactly once (cyclical dependencies are not allowed). Decision commands about upstream variables are checked and applied upon their evaluation and then propagated downstream to the first variable being evaluated. For example, decisions in decision analysis can be supplied this way:
#pick an endpoint ovariable
#make decision variables for any upstream ovariables (this means that you create new scenarios with particular deviations from the actual or business-as-usual answer of that ovariable)
#evaluate endpoint ovariable
#optimize between options defined in decisions.


Other commands include: collapse of marginal columns by sums, means or sampling to reduce data size; and passing input from model level without redefining a whole ovariable.


=== Opasnet Base ===


Opasnet Base is a storage database for all kinds of data needed in open assessments. It may contain parameter values for models, which are typically shown as small tables on knowledge crystal pages, from which they are automatically stored to the database. It may also contain large datasets, such as research datasets or population datasets of thousands or even millions of rows, which are uploaded to the database using an importer interface. Each table has its own structure and may or may not share column names with other tables; however, if a table is directly used as the data slot of an ovariable, it must have a Result column.


Technically, Opasnet Base is a noSQL database using MongoDB software. Metadata of the tables is stored in a MySQL database. This structure offers the speed, searchability, and structural flexibility that a large amount of non-standard data requires. The database also offers version control, as old versions of a data table are kept in the database when new data is uploaded.


The database also contains data about model runs that have been performed at Opasnet, if objects were stored during that model run. This makes it possible to fetch objects produced by a particular code on a particular knowledge crystal page. Typically the newest version is fetched, but information about the old versions is kept as well. The objects stored are not located in MongoDB but in server files that can be accessed with a key. It is also possible to save objects in a non-public way so that the key is not stored in the database and is only given to the person who ran the code. For disk storage reasons, Opasnet does not guarantee that stored objects will be kept permanently; therefore, it is good practice to store final assessment runs with all objects in another location for permanent archival.


There are several ways to access database content.
* If the data is on an Opasnet page, simply go to that page, e.g. http://en.opasnet.org/w/Mercury_concentrations_in_fish_in_Finland#Data
* Use a link to the Opasnet Base interface, e.g. http://en.opasnet.org/w/Special:Opasnet_Base?id=op_en4004.mercury_in_baltic_herring
* Use a function in R:  dat <- opbase.data("Op_en4004", subset="Mercury in Baltic herring")
* Use a function in R for stored objects: objects.latest("Op_en4004", code_name="conc_mehg")


For further instructions, see http://en.opasnet.org/w/Opasnet_Base_UI for user interface and http://en.opasnet.org/w/Table2Base for the wiki interface of small tables.
 
== Appendix S5: Tools to help in shared understanding ==
 
There are lots of software and platforms to support decision making. Some of them have been listed here. The focus is on open source software solutions when available. Many examples come from Finland, as we have practical experience with them. The list aims to cover different functionalities and show examples rather than give an exhaustive list of all possibilities; such lists may be found in Wikipedia, e.g. https://en.wikipedia.org/wiki/Comparison_of_project_management_software. All links were accessed 1 Feb 2020.


{| {{prettytable}}
|+'''Table S5-1. Useful functionalities and software in open policy practice.
! Item
! Functionality or process phase
! Tool or software
|----
!rowspan="2"|Decision process
| Information-based decision support
| There is no single tool covering the whole decision process, and development work is needed. An interesting pilot is the software being developed by the city of Helsinki for comprehensively managing and evaluating their ambitious [https://ilmastovahti.hel.fi Climate Watch] and its impacts.
|----
| Initiative
| Several websites for launching, editing, and signing citizen initiatives at municipality or national level: [https://www.kansalaisaloite.fi Kansalaisaloite] (Citizen Initiative), [https://www.nuortenideat.fi Nuortenideat] (Ideas of the Young), [https://www.kuntalaisaloite.fi Kuntalaisaloite] (Municipality Initiatives). Similar tools could be used also for initiatives launched by Members of Parliament or the Government.
|----
!rowspan="6"|Substance
| Content management
| Diary systems, file and content management systems. Lots of individual solutions, mostly proprietary. VAHVA project by the Finnish Government will provide knowledge and tools for content management.
|----
| Research data and analyses
| [https://avaa.tdata.fi/web/avaa/etusivu AVAA], [https://openscience.fi/ida IDA], [https://www.fairdata.fi/en/ Fairdata] and other data management tools help in managing research data from an original study to archival. [https://www.avoindata.fi/en Avoin data] (open data in Finland), platform for publishing open data. [http://findikaattori.fi/en Findicator]: indicators from all sectors of the society. [https://datahub.io/ Datahub] for open data sharing. Tools for separate analysis tasks are numerous, e.g. [[:en:QGIS|QGIS]] for geographical data. Several research fields have their own research and article databases, such as [https://arxiv.org/ ArXiv.org] (articles about physics, mathematics and other fields). [[:en:List of biological databases|Several biological databases]].
|----
| Public discussion, argumentation, statements
| [https://www.otakantaa.fi Otakantaa], Facebook, Twitter, blogs, and other social media forums for discussion. Websites for fact checking: [https://faktabaari.fi/in-english/ Factbar], [https://fullfact.org Fullfact], [https://fullfact.org/blog/2016/dec/need-to-know/ Need to know project for fact checking]. [http://agoravoting.org Agoravoting] is an open voting system. [https://www.lausuntopalvelu.fi Lausuntopalvelu] collects statements from the public and organisations related to planned legislation and Government programs in Finland. [https://unanimous.ai/what-is-si/ Swarm AI] for collective intelligence.
|----
| News
| News feeds (open source) [[:en:CommaFeed|CommaFeed]], [[:en:Tiny Tiny RSS|Tiny Tiny RSS]]. Semantic, automated information searches, e.g. [http://www.leiki.com/urldemo?http://www.nytimes.com/2014/09/12/science/space/after-a-two-year-trek-nasa-mars-rover-reaches-its-mountain-lab.html Leiki].
|----
| Description and assessment of decision situations and relevant causal connections
| [[Opasnet]] for performing [[Open assessment]]s and impact assessments. [[Knowledge crystal]]s as integral parts of models and assessments. [http://sysdyn.simantics.org/ Simantics System Dynamics] in semantic models. [https://jupyter.org/ Jupyter notebooks] for collaborative model development. [[:en:Wikidata|Wikidata]], [[:en:Wikipedia|Wikipedia]] as storages of structured data and information.
|----
| Laws and regulations
| [http://data.finlex.fi/en/main Semantic Finlex] contains the whole Finnish legislation and e.g. the decisions by the Supreme Court in a semantic structure.
|----
!rowspan="4"|Methods
| Preparation of documents, co-creation, real-time co-editing
| Several co-editing tools, e.g. Hackpad, MS Office365, Google Docs, Etherpad, Dropbox Paper, MediaWiki and [[:en:Git|Git]]. These tools enable the opening of the planning and writing phase of a decision. E.g. the climate strategy of Helsinki was co-created online with Google Docs and Sheets in 2018.
|----
| Development and spreading good practices
| [https://www.innokyla.fi/en/home InnoVillage] helps to develop practices faster, when everyone's guidance is available online and can be commented on.
|----
| Organising systems for information and discussions
| Decentralised social networking protocol [https://www.w3.org/blog/news/archives/6785 Activitypub]
Tools: [https://fullfact.org/blog/2016/aug/automated-factchecking/ Full Fact automated fact checking], [[:en:Compendium (software)|Compendium]]. Vocabularies and semantic tools: [[:en:Resource Description Framework|Resource Description Framework (RDF)]], [https://finto.fi/en/ Finto] (Finnish Thesaurus and Ontology Service), [https://doi.org/10.1007/978-3-319-12069-0_1 AIF-RDF Ontology] using the Conceptual Graphs User Interface COGUI. These act as a basis for organising, condensing and spreading knowledge.
|----
| Information design, visualisations
| Interactive and static visualisations from complex data. [https://shiny.rstudio.com/ Shiny], [http://rich-iannone.github.io/DiagrammeR/index.html Diagrammer], [https://www.gapminder.org/ Gapminder], [https://www.lucify.com Lucify], [https://plot.ly Plotly], [http://js.cytoscape.org/ Cytoscape]
|----
!rowspan="3"|Work
| Work processes in decision making, research etc.: follow-up, documentation
| The [https://dev.hel.fi/paatokset/ Ahjo] decision repository and the [https://dev.hel.fi/projects/openahjo/ Openahjo] interface document and retrieve decisions made in the city of Helsinki. [[:en:Git|Git]] enables reporting of both research and decision processes. There are several new platforms for improving science, such as [https://osf.io/ Open Science Framework] for facilitating open collaboration in research. [https://www.omidyar.com/ Omidyar Network] is a philanthropic investment firm supporting e.g. governance and citizen engagement.
|----
| Co-creation, experiments, crowdsourcing
| [https://www.kokeilunpaikka.fi/en/ Kokeilun paikka] promotes experiments when applicable information is needed but not available. [http://sociocracy30.org/ Sociocracy 3.0] provides learning material and principles for open collaboration in organisations of any size.
|----
| Project management
| There is a lot of project management software, mainly targeted at enterprise use but somewhat applicable in decision making or research. Some examples: [[:en:OpenProject|OpenProject]], [[:en:Project Management Body of Knowledge|Project Management Body of Knowledge]], [[:en:Comparison of project management software|Comparison of project management software]], [http://www.fingertip.org/ Fingertip].
|----
!rowspan="1"|Stakeholders
| Expert services
| [https://www.researchgate.net/home ResearchGate], [https://solved.fi/ Solved] and other expert networks.
|}


== See also ==


* [[Shared information objects in policy support]], a previous version of this manuscript
* [[:heande:User talk:Jouni#Fingetrip notes]]
* [http://bigthink.com/paul-ratner/how-to-disagree-well-7-of-the-best-and-worst-ways-to-argue 7 of the best and worst ways to argue]
* [[Shared understanding]]
* [[Structure of shared understanding]]
* [[Open policy ontology]]
* [[Open assessment]]
* [[Open policy practice]]
* [[:op_fi:Valassafari]].
* [[:op_fi:Hiilineutraali Helsinki 2035]]
* [[Help:Extended causal diagram]]
* [[Insight network]]
* [[Open policy practice]]
* [[Shared understanding]]
* [[Discussion]]
* [[Properties of good assessment]]
* [[Evaluating performance of environmental health assessments]]
* [[Benefit-risk assessment of Baltic herring and salmon intake]]
** [[:op_fi:Helsingin ohjelmalliset energiatehokkuus- ja ilmastotoimenpiteet ja -tavoitteet]]
** [[:op_fi:Helsingin strategiset energiatehokkuus- ja ilmastotavoitteet]]
** [https://dev.hel.fi/projects/openahjo/ OpenAhjo]
** [https://dev.hel.fi/about/ Hel Dev people]
* Daniel K. Walco, Jane L. Risen. The Empirical Case for Acquiescing to Intuition [http://journals.sagepub.com/doi/10.1177/0956797617723377]
* [[:File:Use of risk assessment in the society.ppt]]
* The politics of evidence revisited [https://paulcairney.wordpress.com/2018/01/04/the-politics-of-evidence-revisited/]
=== Parts not used ===
* Political extremism is supported by an illusion of understanding<ref>Philip M. Fernbach, Todd Rogers, Craig R. Fox, Steven A. Sloman. (2013) Political Extremism Is Supported by an Illusion of Understanding. Psychological Science Volume: 24 issue: 6, page(s): 939-946. https://doi.org/10.1177/0956797612464058</ref>
* [[:en:Political polarization]]
* U.S. media polarization and the 2020 election: a nation divided [https://www.journalism.org/2020/01/24/u-s-media-polarization-and-the-2020-election-a-nation-divided/].
* WIRED: Psychological microtargeting could actually save politics [https://www.wired.co.uk/article/psychological-microtargeting-cambridge-analytica-facebook]
* PLOS blogs: Future of open series [https://blogs.plos.org/plos/category/future-of-open-series/]
None of the websites and tools described in this article offer a complete environment for open topic-wise scientific information production and discussion that would also support decision making. Opasnet works well for online assessments, but it is not optimised for documenting policy discussions or scientific work in real time. Climate Watch was designed to implement open policy practice in a specific context of municipality climate action plans. There are plans to generalise the functionalities for a wider user base. This could be achieved by merging the functionalities of e.g. Opasnet, Open Science Framework, open data repositories, and discussion forums. Even if different tasks would happen at separate websites, they could form an integrated system (by using e.g. standard interfaces and permanent resource locations) to be used by decision makers, experts, stakeholders, and machines. Resource description framework and ontologies could be helpful in organising such a complex system.
Boundary object is a concept for managing information work within a heterogeneous group of participants<ref>Star SL, Griesemer JR. Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 1989; 19 387-420.</ref>. As people come from different disciplines, they see things differently and use different words to describe things. Boundary objects are common words or concepts that are similar enough across disciplines so that they help understanding but allow specific interpretations within disciplines or by individuals. Several dioxin-related knowledge crystals were successfully used as boundary objects in BONUS GOHERR project (Table S1-1) to produce shared understanding among authorities, fishers, and researchers from public health, marine biology, and social sciences.<ref>GOHERR VIITE työpajapaperi##</ref>
Shared understanding aims to bring different views together. This is something that is needed especially during this time of polarisation<ref>Pew Research Center. (2020). U.S. Media Polarization and the 2020 Election: A Nation Divided. https://www.journalism.org/2020/01/24/u-s-media-polarization-and-the-2020-election-a-nation-divided/</ref>. The open assessments performed have identified more agreements, even about heated topics, than what seems to be the case based on social media. The pneumococcus case is an example of this.
Presenting also controversial and unpopular ideas is a prerequisite for a complete shared understanding. Thus, a community producing shared understanding should cherish and respect such activity and promote contributions even if they are incompatible with the scientific or another paradigm. This is challenging both for the person presenting such claims and for someone personally opposed to the presented idea. It helps if all parties have faith in the process and its capability to produce fair conclusions<ref>Rodriguez‐Sanchez C, Schuitema G, Claudy M, Sancho‐Esper F. (2018) How trust and emotions influence policy acceptance: The case of the Irish water charges. British Journal of Social Psychology 57: 3: 610-629. https://doi.org/10.1111/bjso.12242</ref>. Therefore, the society should promote the acceptability of open decision processes, open participation, and diverse contributions. Such an attitude prevails in the climate strategy of Helsinki, but it was present already five years earlier in the Transport and communication strategy in digital Finland (Table S1-1).
Openness does not mean that any kind of organisation or individual is equally inclined to or capable of using assessment information. Such equity issues are considered as a separate question and are not dealt with in this generic examination.
Openness is crucial because a priori it is impossible to know who may have important factual information or value judgements about the topic.
Open platforms for deliberation of decisions are available (otakantaa.fi, kansalaisaloite.fi), and sharing of code is routinely done via large platforms (ubuntu.com, cran.r-project.org). Also generic online tools such as Google Drive (drive.google.com), Slack (slack.com), and others have familiarised people with online collaboration and the idea that information is accessible from anywhere.
ArXiv.org is a famous example of preprint servers offering a place for publishing and discussing manuscripts before peer review<ref>Cornell University Library. arXiv.org. https://arxiv.org/. Accessed 1 Feb 2020.</ref>. Such websites, as well as open access journals, have increased during recent years as the importance of availability of scientific information has been understood. Using open data storages (ida.fairdata.fi) for research results is often required by research funders.
Aumann's agreement theorem demonstrates that rational Bayesian agents with common knowledge of each other's beliefs cannot agree to disagree, because they necessarily end up updating their posteriors with those of the other<ref>Aumann RJ. (1976) Agreeing to Disagree. The Annals of Statistics. 4(6):1236–1239. doi:10.1214/aos/1176343654.</ref>. In this thinking, shared understanding can be seen as an intermediate phase where the disagreements have been identified but the posteriors have not yet been updated to reflect the data possessed by the other person.
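A minimal numerical sketch of this convergence (not part of the original article) considers two Bayesian agents who hold different Beta priors about the same proportion and then observe the same data; the numbers below are hypothetical:

<pre>
# Two agents hold different Beta(prior_a, prior_b) priors about a proportion p
# and observe the same shared data: k successes out of n trials (hypothetical).
k <- 40; n <- 50

# Posterior mean of a Beta-binomial model: (prior_a + k) / (prior_a + prior_b + n)
post_mean <- function(prior_a, prior_b) (prior_a + k) / (prior_a + prior_b + n)

post_mean(1, 9)   # sceptical prior (mean 0.1)  -> posterior mean about 0.68
post_mean(9, 1)   # optimistic prior (mean 0.9) -> posterior mean about 0.82
</pre>

Although the priors disagree strongly, the posteriors are already close after the shared evidence; the sketch only illustrates the pull of shared data that the theorem formalises.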
Acquiescence refers to situations where people know that their choice is irrational but choose it anyway<ref>Walco DK, Risen JL. The empirical case for acquiescing to intuition. Psychological Science 2017;28(12):1807-1820. doi:10.1177/0956797617723377</ref>.
There is a new political movement in Finland (Liike Nyt, https://liikenyt.fi/) that claims that its members of parliament will vote according to the conclusions of a public online discussion. This approach is potentially close to co-created policy recommendations based on shared understanding. However, at least so far, they are not using novel information tools or concepts to synthesise public discussions; instead, they use social media groups and online polls. (The Italian Five Star Movement may be a comparable example.)
Also, we hypothesise that only a few major paradigms will emerge, namely those whose applicability is wide and independent of the discipline. The scientific paradigm is expected to be one of them, and it will be interesting to see what else emerges. People commonly reason against some unintuitive rules of the scientific method (e.g. they try to prove a hypothesis right rather than wrong), but it is not clear whether this will create a need to develop a paradigm for an alternative approach. It is not even clear whether people are willing to accept the idea that there could be different, competing rules for reasoning within a single assessment or decision process.
Indeed, only 7 % of people contributing to Wikipedia do it for professional reasons<ref>Pande M. Wikipedia editors do it for fun: First results of our 2011 editor survey. 2011. https://blog.wikimedia.org/2011/06/10/wikipedia-editors-do-it-for-fun-first-results-of-our-2011-editor-survey/. Accessed 1 Feb 2020.</ref>.
Omidyar Network is an organisation that gives grants to non-profit organisations and also invests in startups that promote e.g. governance and citizen engagement<ref>Omidyar Network. A world of positive returns. http://www.omidyar.com. Accessed 1 Feb 2020.</ref>. As an example, they support tools to improve online discussion with annotations<ref>Hypothesis. Annotate the web, with anyone, anywhere. https://web.hypothes.is/. Accessed 1 Feb 2020.</ref>, an objective similar to that of structured discussions.
Additional references on pragma-dialectics are available<ref>Eemeren FH van. Reasonableness and effectiveness in argumentative discourse. Fifty contributions to the development of pragma-dialectics. Springer International Publishing Switzerland, 2015. ISBN 978-3-319-20954-8. doi:10.1007/978-3-319-20955-5</ref>.
Some experts and politicians seem to see criticism as a threat that should be pre-emptively avoided by only publishing finalised products. In contrast, agile processes publish their draft products as soon as possible and use criticism as a source of useful and relevant information.
Open Science Framework is a project that aims to increase reproducibility in science by developing structured protocols for reproducing research studies, documenting study designs and results online, and producing open source software and preprint services to support this<ref>Open Science Framework. https://osf.io/. Accessed 1 Feb 2020.</ref>. The Framework maintains a web-workspace for documenting research as it unfolds rather than only afterwards in articles.
Our own experience is the same: we have not seen hijacking, malevolent behaviour, or low-quality junk contributions. Some robots do post unrelated advertisement material on Opasnet pages, but that is easy to identify and remove, and it has not become a problem.
'''TO DO
* A specific link should be available to mean the object itself rather than its description. http://en.opasnet.org/entity/...?
* All terms and principles should be described on their own pages at Opasnet. Use italics to refer to these pages.
* Upload the Tuomisto 1999 thesis to Julkari, and also Paakkila 1999.
Suggested editors: Daniel Angus, Özlem Uzuner, Sergio Villamayor Tomás, Frédéric Mertens


The article has since been published. You can read about its main topics on the Opasnet pages Open policy practice, Shared understanding, Open assessment, and Properties of good assessment.

Abstract

Background

Evidence-informed decision making and better use of scientific information in societal decisions have been areas of development for decades but remain topical. Decision support work can be viewed from the perspective of information collection, synthesis, and flow between decision makers, experts, and stakeholders. Open policy practice is a coherent set of methods for such work. It has been developed and utilised mostly in Finnish and European contexts.

Methods

An overview of open policy practice is given, and its theoretical and practical properties are evaluated based on properties of good policy support. The evaluation is based on information from several assessments and research projects developing and applying open policy practice, and on the authors' practical experiences. The methods are evaluated on their capability to produce quality of content, applicability, and efficiency in policy support, as well as on how well they support close interaction among participants and understanding of each other's views.

Results

The evaluation revealed that the methods and online tools work as expected, as demonstrated by the assessments and policy support processes conducted. The approach improves the availability of information and especially of relevant details. Experts are ambivalent about the acceptability of openness: it is an important scientific principle, but it goes against many current research and decision making practices. However, co-creation and openness are megatrends that are changing science, decision making, and society at large. Contrary to many experts' fears, open participation has not caused problems in performing high-quality assessments. On the contrary, a key challenge is to motivate and help more experts, decision makers, and citizens to participate and share their views. Many methods within open policy practice have also been used widely in other contexts.

Conclusions

Open policy practice proved to be a useful and coherent set of methods. It guided policy processes toward a more collaborative approach, whose purpose was wider understanding rather than winning a debate. There is potential for merging open policy practice with other open science and open decision process tools. Active facilitation, community building, and improving the user-friendliness of the tools were identified as key solutions for improving the usability of the method in the future.

Keywords
environmental health, decision support, open assessment, open policy practice, shared understanding, policy making, collaboration, evaluation, knowledge crystal, impact assessment

Background

This article describes and evaluates open policy practice, a set of methods and tools for improving evidence-informed policy making. Evidence-informed decision support has been a hot and evolving topic for a long time, and its importance is not diminishing any time soon. In this article, decision support is defined as knowledge work that is performed during the whole decision process (ideating possible actions, assessing impacts, deciding between options, implementing decisions, and evaluating outcomes) and that aims to produce better decisions and outcomes[1]. Here, "assessment of impacts" means ex ante consideration about what will happen if a particular decision is made, and "evaluation of outcomes" means ex post consideration about what did happen after a decision was implemented.

The area is complex, and the key players — decision makers, experts, and citizens or other stakeholders — all have different views on the process, their own roles in it, and how information should be used in the process. For example, researchers often think of information as a way to find the truth, while politicians see information as one of the tools to promote political agendas ultimately based on values.[2] Therefore, a successful method should provide functionalities for each of the key groups.

In the late 1970s, especially in the US, the focus was on scientific knowledge and on the idea that political ambitions should be kept separate from objective assessments. Since the 1980s, risk assessment has been a key method for assessing human risks of environmental and occupational chemicals[3]. The National Research Council specifically developed a process that could be used by all US federal agencies. The report emphasised the importance of scientific knowledge in decision making and of scientific methods, such as critical use of data, as integral parts of assessments. Criticism based on observations and rationality is a central idea in the scientific method[4]. The report also clarified the use of causality: the purpose of an assessment is to clarify and quantify a causal path where an exposure to a chemical or other agent leads to a health risk via pathological changes described by the dose-response function of that chemical.

The approach was designed for single chemicals rather than for complex societal issues. This shortcoming was addressed in another report that acknowledged this complexity and offered deliberation with stakeholders, in addition to scientific analysis, as a solution[5]. One idea was to explicate the intentions of the decision maker but also those of the public. Mutual learning about the topic was also seen as important. There are models for describing facts and values in a coherent dual system[6]. However, practical assessments have found it difficult to perform deliberation successfully on a routine basis[7]. Indeed, citizens often complain that even if they have been formally listened to during a process, the processes need more openness, as their concerns have not contributed to the decisions made[8].

Western societies have shown a megatrend of increasing openness in many sectors, including decision making and research. Openness of scientific publishing is increasing, many research funders also demand publishing of data, and research societies are starting to see the publishing of data as a scientific merit in itself[9]. It has been widely acknowledged that the current mainstream of proprietary (in contrast to open access) scientific publishing is a hindrance to spreading ideas and ultimately science[10]. Governments have also been active in opening data and statistics to wide use (data.gov.uk). Governance practices have been developed towards openness and inclusiveness, promoted by international initiatives such as the Open Government Partnership (www.opengovpartnership.org).

As an extreme example, the successful hedge fund Bridgewater Associates implements radical openness and continuous criticism of all ideas presented by its workers rather than letting organisational status determine who is heard[11]. In a sense, they are implementing the scientific method in a much more rigorous way than is typically done in science.

In the early 2000s, several important books and articles were published about mass collaboration[12], wisdom of crowds[13], crowdsourcing in the government[14], and co-creation[15]. A common idea of the authors was that voluntary, self-organised groups had knowledge and capabilities that could be harnessed much more effectively in society than was happening at the time. Large collaborative projects have shown that in many cases they are very effective ways to produce high-quality information, as long as quality control systems are functional. In software development, the Linux operating system, the Git version control software, and the GitHub platform are examples of this. Wikipedia, the largest and most used encyclopedia in the world, has also demonstrated that self-organised groups can indeed produce high-quality content[16].

The five principles of collaboration, openness, causality, criticism, and intentionality (Table 1) were seen as potentially important for environmental health assessment at the Finnish Institute for Health and Welfare (THL; at that time the National Public Health Institute, KTL), and they were adopted in the methodological decision support work of the Centre of Excellence for Environmental Health Risk Analysis (2002-2007). Open policy practice has been developed during the last twenty years especially to improve environmental health assessments<sup>a</sup>. Developers have come from several countries in projects mostly funded by the EU and the Academy of Finland (see Funding and Acknowledgements).

Materials for the development, testing, and evaluation of open policy practice were collected from several sources.

Research projects about assessing environmental health risks were an important platform to develop, test, and implement assessment methods and policy practices. Important projects are listed in Funding. Especially the EU Sixth Framework Programme and its INTARESE and HEIMTSA projects (2005-2011) enabled active international collaboration around environmental health assessment methods.

Assessment cases were performed in research projects and in support for national or municipality decision making in Finland. Methods and tools were developed side by side with practical assessment work (Appendix S1).

Literature searches were performed on scientific and policy literature and websites. Concepts and methods similar to those in open policy practice were sought. Data were searched in PubMed, Web of Knowledge, Google Scholar, and the Internet. In addition, a snowball method was used: found documents were used to screen their references and the authors' other publications to identify new publications. Articles that describe large literature searches and their results include[1][7][17][18].

Open risk assessment workshops were organised as spin-offs of several of these projects for international doctoral students in 2007, 2008, and 2009. The workshops offered a place to share, discuss, and criticise ideas.

A master's course Decision Analysis and Risk Management (6 credit points) was organised by the University of Eastern Finland (previously University of Kuopio) in 2011, 2013, 2015, and 2017. The course taught open policy practice and tested its methods in course work.

Finally, general expertise and understanding were developed through practical experience and long-term follow-up of international and national politics.

The development and selection of methods and tools for open policy practice has roughly followed this iterative pattern, where an idea is improved during each iteration, or sometimes rejected.

  • A need is identified for improving knowledge practices of a decision process or scientific policy support. This need typically arises from scientific literature, project work or news media.
  • A solution idea is developed with the aim of tackling the need.
  • It is checked whether the idea fits logically in the current framework of open policy practice.
  • The idea is discussed in a project team to develop it further and gain acceptance.
  • A practical solution (web tool, checklist or similar) is produced.
  • The solution is piloted in an assessment or policy process.
  • The solution is added into the recommended set of methods of open policy practice.
  • The method is updated based on practical experience.

Development of open policy practice started with a focus on opening up the expert work in policy assessments. In 2007, this line of research produced a summary report about the new methods and tools developed to facilitate assessments[19]. Later, a wider question about open policy practice<sup>b</sup> emerged: how to organise evidence-informed decision making in a situation where the five principles are used as the starting point? The question was challenging, especially as it was understood that societal decision making is rarely a single event but often consists of several interlinked decisions at different time points and sometimes by several decision-making bodies. Therefore, it was seen more as leadership guidance than as advice about a single decision.

This article gives the first comprehensive, peer-reviewed description about the current methods and tools of open policy practice since the 2007 report[19]. Case studies have been published along the way, and the key methods have been described in different articles. Also, all methods and tools have been developed online and the full material has been available at Opasnet (http://en.opasnet.org) for interested readers since each piece was first written.

The purpose of this article is to critically evaluate the performance of open policy practice. Does open policy practice have the properties of good policy support? And does it enable policy support according to the five principles in Table 1?

Table 1. Principles of open policy practice. (COCCI principles)
Principle Description
Collaboration Knowledge work is performed together with the aim of producing shared information.
Openness All work and all information is openly available to anyone interested for reading and contributing all the time. If there are exceptions, these must be publicly justified.
Causality The focus is on understanding and describing the causal relations between the decision options and the intended outcomes. The aim is to predict what impacts will likely occur if a particular decision option is chosen.
Criticism All information presented can be criticised based on relevance and accordance to observations. The aim is to reject ideas, hypotheses — and ultimately decision options — that do not hold against critique.
Intentionality The decision makers explicate their objectives and decision options under consideration. Also values of other participants or stakeholders are documented and considered.

Open policy practice

Figure 1. Information flows in open policy practice. Open assessments and web-workspaces have an important role as information hubs. They collect relevant information for particular decision processes and organise and synthesise it into useful formats especially for decision makers but also for anyone. The information hub works more effectively if all stakeholders contribute to one place, or alternatively facilitators collect their contributions there.

In this section, open policy practice is described in its current state. First, an overview is given, and then each part is described in more detail.

Open policy practice is a set of methods to support and perform societal decision making in an open society, and it is the overarching concept covering all methods, tools, practices, and terms presented in this article[20]. Its theoretical foundation lies in graph theory[21] and systematic information structures. Open policy practice especially focuses on promoting the openness, flow, and use of information in decision processes (Figure 1). Its purpose is to give practical guidance for the whole decision process, from ideating possible actions to assessing impacts, deciding between options, implementing decisions, and finally evaluating outcomes. It aims to be applicable to all kinds of societal decision situations in any administrative area or discipline. An ambitious objective of open policy practice is to be so effective that a citizen can observe improvements in decisions and outcomes, and so reliable that a citizen is reluctant to believe claims that contradict the shared understanding produced by open policy practice.

Open policy practice is based on the five principles presented in Table 1. The principles can be met if the purpose of policy support is set to produce shared understanding (a situation where different facts, values, and disagreements related to a decision situation are understood and documented). The description of shared understanding (and consequently improved actions) is thus the main output of open policy practice (see also Figure 2). It is a product that guides the decision and is the basis for evaluation of outcomes.

This guidance is formalised as evaluation and management of the work and knowledge content during a decision process. It defines the criteria against which the knowledge process needs to be evaluated and managed. It contains methods to look at what is being done, whether the work is producing the intended knowledge and outputs, and what needs to be changed. Each task is evaluated before, during, and after the actual execution, and the work is iteratively managed based on this.

The execution of a decision process is about collecting, organising, and synthesising scientific knowledge and values in order to achieve objectives by informing the decision maker and stakeholders. A key part is open assessment, which typically estimates the impacts of the planned decision options. Assessment and knowledge production are also performed during the implementation and evaluation steps. Execution also contains the acts of making and implementing decisions; however, these are such case-specific processes, depending on the topic, decision maker, and societal context, that they are not discussed in this article.

Figure 2. The three parts of open policy practice. The timeline goes roughly from left to right, but all work should be seen as iterative processes. Shared understanding as the main output is in the middle, expert-driven information production is a part of execution. Evaluation and management gives guidance to the execution.

Shared understanding

Shared understanding is a situation where all participants' views about a particular topic have been understood, described and documented well enough so that people can know what facts, opinions, reasonings, and values exist and what agreements and disagreements exist and why. Shared understanding is produced in collaboration by decision makers, experts, and stakeholders. Each group brings in their own knowledge and concerns. Shared understanding aims to reflect all the five principles of open policy practice. This creates requirements to the methods that can be used to produce shared understanding.

Shared understanding is always about a particular topic and produced by a particular group of participants. Depending on the participants, the results might differ, but with an increasing number of participants, it putatively approaches a shared understanding of the society as a whole. Ideally, each participant agrees that the written description correctly contains their own thinking about the topic. Participants should even be able to correctly explain what other thoughts there are and how they differ from their own. Ideally any participant can learn, understand, and explain any thought represented in the group. Importantly, there is no need to agree on things, just to agree on what the disagreements are about. Therefore, shared understanding is not the same as consensus or agreement.

Shared understanding has potentially several purposes that all aim to improve the quality of societal decisions. It helps people understand complex policy issues. It helps people see their own thoughts from a wider perspective and thus increases acceptance of decisions. It improves trust in decision makers; but it may also erode trust if the actions of a decision maker are not understandable based on shared understanding. It dissects each difficult detail into separate discussions and then collects statements into an overview; this helps to allocate the time resources of participants efficiently to critical issues. It improves awareness of new ideas. It releases the full potential of the public to prepare, inform, and make decisions. How well these purposes have been fulfilled in practical assessments is discussed in Results.

Test of shared understanding

Test of shared understanding can be used to evaluate how well shared understanding has been achieved. In a successful case, all participants of a decision process give positive answers to the questions in Table 2. In a way, shared understanding is a metric for evaluating how well decision makers have embraced the knowledge base of the decision situation.

Table 2. Test of shared understanding.
Question Who is asked?
Is all relevant and important information described? All participants of the decision processes (including knowledge gathering processes)
Are all relevant and important value judgements described? (Those of all participants, not just decision makers.)
Are the decision maker's decision criteria described?
Is the decision maker's rationale from the criteria to the decision described?

Everything that is done aims to offer better understanding about impacts of the decision related to the decision maker's objectives. However, conclusions may be sensitive to initial values, and ignoring stakeholders' views may cause trouble at a later stage. Therefore, other values in the society are also included in shared understanding.

Shared understanding may have different levels of ambition. On an easy level, shared understanding is taken as general guidance and an attitude towards other people's opinions. Main points and disagreements are summarised in writing, so that an outsider is able to understand the overall picture.

On an ambitious level, the idea of documenting all opinions and their reasonings is taken literally. Participants' views are actively elicited and tested to see whether a facilitator is able to reproduce their thought processes. The objective here is to document the thinking in such a detailed way that a participant's views on the key questions of a policy can be anticipated from the description they have given. This is done by using insight networks, knowledge crystals, and other methods (see below). Written documentation with an available and usable structure is crucial, as it allows participation without being physically present. It also spreads shared understanding to decision makers and to those who were not involved in discussions.

Good descriptions of shared understanding are able to quickly and easily incorporate new information or scenarios from the participants. They can be examined using different premises, i.e., a user should be able to quickly update the knowledge base, change the point of view, or reanalyse how the situation would look with alternative valuations. Ideally, a user interface would allow the user to select input values with intuitive menus and sliders and would show the impacts of changes instantly.

Shared understanding as the key objective gives guidance to the policy process in general. But it also creates requirements that can be described as quality criteria for the process and used to evaluate and manage the work.

Evaluation and management

Evaluation is about following and checking the plans and progress of the decisions and implementation. Management is about adjusting work and updating actions based on evaluation to ensure that objectives are reached. Several criteria were developed in open policy practice to evaluate and describe the decision support work. Their purpose is to help participants focus on the most important parts of open policy practice.

Guidance exists about crowdsourced policymaking[22], and similar ideas have been utilised in open assessment.

Properties of good policy support

There is a need to evaluate an assessment work before, during, and after it is done[17]. A key question is, what makes good policy support and what criteria should be used (see Table 3)[23].

Fulfilling all these criteria is of course not a guarantee that the outcomes of a decision will be successful. But the properties listed have been found to be important determinants of the success of decision processes. In projects utilising open policy practice, poor performance of specific properties could be linked to particular problems observed. Evaluating these properties before or during a decision process could help to analyse what exactly is wrong, as problems with such properties are by then typically visible. Thus, using this evaluation scheme proactively makes it possible to manage the decision making process towards higher quality of content, applicability, and efficiency.

Table 3. Properties of good policy support. Here, "assessment" can be viewed as a particular expert work producing a report about a specific question, or as a wider description of shared understanding about a whole policy process. Assessment work is done before, during, and after the actual decision.
Category Description Guiding questions Related principles
Quality of content Specificity, exactness and correctness of information. Correspondence between questions and answers. How exact and specific are the ideas in the assessment? How completely does the (expected) answer address the assessment question? Are all important aspects addressed? Is there something unnecessary? Openness, causality, criticism
Applicability Relevance: Correspondence between output and its intended use. How well does the assessment address the intended needs of the users? Is the assessment question good in relation to the purpose of the assessment? Collaboration, openness, criticism, intentionality
Availability: Accessibility of the output to users in terms of e.g. time, location, extent of information, extent of users. Is the information provided by the assessment available when, where, and to whom it is needed? Openness
Usability: Potential of the information in the output to generate understanding among its user(s) about the topic of assessment. Are the intended users able to understand what the assessment is about? Is the assessment useful for them? Collaboration, openness, causality, intentionality
Acceptability: Potential of the output being accepted by its users. Fundamentally a matter of its making and delivery, not its information content. Is the assessment (both its expected results and the way the assessment is planned to be made) acceptable to the intended users? Collaboration, openness, criticism, intentionality
Efficiency Resource expenditure of producing the assessment output either in one assessment or in a series of assessments. How much effort is needed for making the assessment? Is it worth spending the effort, considering the expected results and their applicability for the intended users? Are the assessment results useful also in some other use? Collaboration, openness

Quality of content refers to the output of an assessment, typically a report, model or summary presentation. Its quality is obviously an important property. If the facts are plain wrong, it is more likely to misguide than lead to good decisions. Specificity, exactness, and correctness describe how large the remaining uncertainties are and how close the answers probably are to the truth (compared to some golden standard). In some statistical texts, similar concepts have been called precision and accuracy, although with decision support they should be understood in a flexible rather than strictly statistical sense.[24] Coherence means that the answers given are those to the questions asked.

Applicability is an important aspect of evaluation. It looks at properties that affect how well the decision support can and will be applied. It is independent of the quality of content, i.e. despite high quality, an assessment may have very poor applicability. The opposite may also be true, as sometimes faulty assessments are actively used to promote policies. However, usability typically decreases rapidly if the target audience evaluates an assessment to be of poor quality.

Relevance asks whether a good question was asked to support decisions. Identification of good questions requires lots of deliberation between different groups, including decision makers and experts, and online forums may potentially help in this.

Availability is a more technical property and describes how easily a user can find the information when needed. A typical problem is that a potential user does not know that a piece of information exists even if it could be easily accessed.

Usability may differ from user to user, depending on e.g. background knowledge, interest, or time available to learn the content.

Acceptability is a very complex issue and most easily detectable when it fails. A common situation is that stakeholders feel that they have not been properly heard and therefore any output from decision support is perceived as faulty. Doubts about the credibility of the assessor also fall into this category.

Efficiency evaluates resource use when performing an assessment or other decision support. Money and time are two common measures for this. Often it is most useful to evaluate efficiency before an assessment is started. Is it realistic to produce new important information given the resources and schedule available? If more/less resources were available, what value would be added/lost? Another aspect in efficiency is that if assessments are done openly, reuse of information becomes easier and the marginal cost and time of a new assessment decreases.

All properties of decision support, not just efficiency or quality of content, are meant to guide planning, execution, and evaluation of the whole decision support work. If they are always kept in mind, they can improve daily work.

Settings of assessments

Sometimes, a decision process or an assessment may be missing a clear understanding of what should be done and why. An assessment may even be launched in a hope that it will somehow reveal what the objectives or other important factors are. Settings of assessments (Table 4) are used to explicate these so that useful decision support can be provided[25]. Examining the sub-attributes of an assessment question can also help:

  • Research question: the actual question of an open assessment
  • Boundaries: temporal, geographical, and other limits within which the question is considered
  • Decisions and scenarios: decisions and options to assess and scenarios to consider
  • Timing: the schedule of the assessment work
  • Participants: people who will or should contribute to the assessment
  • Users and intended use: users of the final assessment report and purposes of the use
Table 4. Important settings for environmental health and other impact assessments within the context of public policy making.
Attribute Guiding questions Example categories
Impacts
  • Which impacts are addressed in assessment?
  • Which impacts are the most significant?
  • Which impacts are the most relevant for decision making?
Environment, health, cost, equity
Causes
  • Which causes of impacts are recognized in assessment?
  • Which causes of impacts are the most significant?
  • Which causes of impacts are the most relevant for decision making?
Production, consumption, transport, heating, power production, everyday life
Problem owner
  • Who has the interest, responsibility and/or means to assess the issue?
  • Who actually conducts the assessment?
  • Who has the interest, responsibility and/or power to make decisions and take actions upon the issue?
  • Who are affected by the impacts?
Policy maker, industry, business, expert, consumer, public
Target users
  • Who are the intended users of assessment results?
  • Who needs the assessment results?
  • Who can make use of the assessment results?
Policy maker, industry, business, expert, consumer, public
Interaction
  • What is the degree of openness in assessment (and management)? (See Table 5.)
  • How does assessment interact with the intended use of its results? (See Table 6.)
  • How does assessment interact with other actors in its context?
Isolated, informing, participatory, joint, shared

Interaction and openness

In open policy practice, the method itself is designed to facilitate openness in all its dimensions. The dimensions of openness help to identify if and how the work deviates from the ideal of openness, so that the work can be improved in this respect (Table 5)[18].

Table 5. Dimensions of openness in decision making.
Dimension Description
Scope of participation Who is allowed to participate in the process?
Access to information What information about the issue is made available to participants?
Timing of openness When are participants invited or allowed to participate?
Scope of contribution Which aspects of the issue are participants invited or allowed to contribute to?
Impact of contribution How much are participant contributions allowed to have influence on the outcomes? How much weight is given to participant contributions?

Openness can also be examined based on how intensive it is and what kind of collaboration between decision makers, experts, and stakeholders is aimed for[7][26]. Different approaches are described in Table 6.

Table 6. Categories of interaction within the knowledge-policy interaction framework.
Category Description
Isolated Assessment and use of assessment results are strictly separated. Results are provided for intended use, but users and stakeholders can not interfere with the making of the assessment.
Informing Assessments are designed and conducted according to specified needs of intended use. Users and limited groups of stakeholders may have a minor role in providing information to the assessment, but mainly serve as recipients of assessment results.
Participatory Broader inclusion of participants is emphasized. Participation is, however, treated as an add-on alongside the actual processes of assessment and/or use of assessment results.
Joint Involvement and exchange of summary-level information among multiple actors is emphasised in scoping, management, communication, and follow-up of assessment. On the level of assessment practice, actions by different actors in different roles (assessor, manager, stakeholder) remain separate.
Shared Different actors engage in open collaboration upon determining assessment questions, seeking answers to them, and implementing answers in practice. However, the actors involved in an assessment retain their roles and responsibilities.

These evaluation methods guide the actual execution of a decision process.

Execution and open assessment

Execution is the work during a decision process, including ideating possible actions, assessing impacts, deciding between options, implementing decisions, and evaluating outcomes. Execution is guided by information produced in evaluation and management. The focus of this article is on knowledge processes that support decisions. Therefore, methods to reach or implement a decision are not discussed here.

Open assessment is a method for performing impact assessments using insight networks, knowledge crystals, and web-workspaces (see below). Open assessment is an important part of execution and the main knowledge production method in open policy practice.

An assessment aims to quantify important objectives, and especially compare differences in impacts resulting from different decision options. In an assessment, current scientific information is used to answer policy-relevant questions that inform decision makers about the impacts of different options.

Open assessments are typically performed before a decision is made (but e.g. the city of Helsinki has used both ex ante and ex post approaches with its climate strategy[27]). The focus is by necessity on expert knowledge and how to organise it, although prioritisation is only possible if the objectives and valuations of the decision maker and stakeholders are known. For a list of major open assessments, see Appendix S1.

As a research topic, open assessment attempts to answer this question: "How can factual information and value judgements be organised for improving societal decision making in a situation where open participation is allowed?" As can be seen, openness, participation, and values are taken as given premises. This was far from common practice but not completely new when the first open assessments were performed in the early 2000s[5].

Since the beginning, the main focus has been to think about information and information flows, rather than jurisdictions, political processes, or hierarchies. So, open assessment deliberately focuses on impacts and objectives rather than questions about procedures or mandates of decision support. The premise is that if the information production and dissemination are completely open, the process can be generic, and an assessment can include information from any contributor and inform any kind of decision-making body. Of course, quality control procedures and many other issues must be functional under these conditions.

Co-creation

Co-creation is a method for producing open contents in collaboration, and in this context specifically knowledge production by self-organised groups. It is a discipline in itself[15], and guidance about how to manage and facilitate co-creation can be found elsewhere. Here, only a few key points are raised about facilitation and structured discussion.

Information has to be collected, organised, and synthesised; facilitators need to motivate and help people to share their information. This requires dedicated work and skills that are not typically available among experts or decision makers. Co-creation also contains practices and methods, such as motivating participation, facilitating discussions, clarifying and organising argumentation, moderating contents, using probabilities and expert judgement for describing uncertainties, or developing insight networks (see below) or quantitative models. Sometimes the skills needed are called interactional expertise.

Facilitation helps people participate and interact in co-creation processes using hearings, workshops, online questionnaires, wikis, and other tools. In addition to practical tools, facilitation implements principles that have been seen to motivate participation[14]. Three are worth mentioning here, because they have been shown to significantly affect the motivation to participate.

  • Grouping: Facilitation methods are used to promote the participants' feeling of being important members of a group that has a meaningful, shared purpose.
  • Trust: Facilitation builds trust among people that they can safely express their ideas and concerns, and that other members of the group support participation even if they disagree on the substance.
  • Respect: Contributions are systematically evaluated according to their merit so that each participant receives the respect they deserve based on their contributions as individuals or members of a group.

Structured discussions are synthesised and reorganised discussions whose purpose is to highlight key statements and the argumentation that leads to acceptance or rejection of these statements. Discussions can be organised according to pragma-dialectical argumentation rules[28] or an argumentation framework[29], so that arguments form a hierarchical thread pointing to a main statement or statements. Attack arguments are used to invalidate other arguments by showing that they are either untrue or irrelevant in their context; defend arguments are used to protect against attacks; and comments are used to clarify issues. For an example, see Figure S2-5 in Appendix S2 and the links therein.

The discussions can be natural discussions that are reorganised afterwards or online discussions where the structure of contributions is governed by the tools used. A test environment exists for structured argumentation[30], and Opasnet has R functions for analysing structured discussions written on wiki pages.
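As a simplified sketch of the principle (not the actual Opasnet R functions; the argument identifiers and texts are invented), the status of a main statement can be derived by walking the argument tree: an argument stands unless at least one of its attacking child arguments itself stands.

<pre>
# Each argument lists its parent and whether it attacks or defends that parent.
arguments <- list(
  A   = list(parent = NA,   type = NA,       text = "Main statement"),
  A1  = list(parent = "A",  type = "attack", text = "Claim: A is untrue"),
  A2  = list(parent = "A",  type = "defend", text = "Evidence supporting A"),
  A11 = list(parent = "A1", type = "attack", text = "A1 is irrelevant here")
)

survives <- function(id) {
  kids <- names(Filter(function(x) identical(x$parent, id), arguments))
  if (length(kids) == 0) return(TRUE)                 # unchallenged arguments stand
  attackers <- kids[vapply(arguments[kids],
                           function(x) identical(x$type, "attack"), logical(1))]
  # An argument stands only if none of its attackers stands.
  !any(vapply(attackers, survives, logical(1)))
}

survives("A")   # TRUE: the only attack (A1) is itself defeated by A11
</pre>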

Insight networks

Insight networks are graphs as defined in graph theory[21]. In an insight network, actions, objectives, and other issues are depicted as nodes, and their causal and other relations as arrows (also known as edges). An example is shown in Figure 3, which describes a potential dioxin-related decision to clean up emissions from waste incineration. The logic of such a decision can be described as a chain or network of causally dependent issues: reduced dioxin emissions to air improve air quality and reduce dioxin deposition into the Baltic Sea; this has a favourable effect on concentrations in Baltic herring; this reduces human exposure to dioxins via fish; and this helps to achieve the ultimate objective of reduced health risks from dioxin. Insight networks aim to facilitate understanding, analysing, and discussing complex policy issues.

Figure 3. Insight network about dioxins, Baltic fish, and health as described in the BONUS GOHERR project[31]. Decisions are shown as red rectangles, decision makers and stakeholders as yellow hexagons, decision objectives as yellow diamonds, and substantive issues as blue nodes. The relations are written on the diagram as predicates of sentences where the subject is at the tail of the arrow and the object is at the tip of the arrow. For other insight networks, see Appendix S2.

Causal modelling and causal graphs as such are old ideas, and various methods, both qualitative and quantitative, have been developed for them. However, the additional ideas with insight networks were that a) all non-causal issues can and should also be linked to the causal core in some way, if they are relevant to the decision, and therefore b) insight networks can be effectively used for clarifying one's ideas, contributing, and then communicating a whole decision situation rather than just the causal core. In other words, a participant in a policy discussion should be able to make a reasonable connection between what they are saying and some node in an insight network developed for that policy issue. If they are not able to make such a link, their point is probably irrelevant.

The first implementations of insight networks were about the toxicology of dioxins[32] and the restoration of a closed asbestos mine area[33]<sup>c</sup>. In the early cases, the main purpose was to give structure to the discussion about and examination of an issue rather than to be a backbone for quantitative models. In later implementations, such as the composite traffic assessment[34] or the BONUS GOHERR project[31], diagrams have been used for both purposes. Most open assessments discussed later (and listed in Appendix S1) have used insight networks to structure and illustrate their content.
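As a sketch of how such a network can be handled computationally (node names are simplified from Figure 3; this is illustrative, not code from the assessments themselves), the causal relations can be stored as an edge list and queried, for example to list all causal ancestors of the health outcome:

<pre>
edges <- data.frame(
  from = c("Emission cleanup", "Dioxin emissions", "Dioxin in the Baltic Sea",
           "Dioxin in herring", "Dioxin exposure"),
  to   = c("Dioxin emissions", "Dioxin in the Baltic Sea", "Dioxin in herring",
           "Dioxin exposure", "Health risk"),
  relation = c("reduces", "deposits into", "accumulates in",
               "causes exposure via fish", "increases"),
  stringsAsFactors = FALSE
)

# Find all upstream nodes (causal ancestors) of a given node.
upstream <- function(node, edges) {
  parents <- edges$from[edges$to == node]
  unique(c(parents, unlist(lapply(parents, upstream, edges = edges))))
}

upstream("Health risk", edges)
# "Dioxin exposure" "Dioxin in herring" "Dioxin in the Baltic Sea"
# "Dioxin emissions" "Emission cleanup"
</pre>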

Knowledge crystals

Knowledge crystals are web pages where specific research questions are collaboratively answered by producing rationale with any data, facts, values, reasoning, discussion, models, or other information that is needed to convince a critical, rational reader (Table 7).

Knowledge crystals have a few distinct features. The web page of a knowledge crystal has a permanent identifier or URL and an explicit topic, or question, which does not change over time. A user may come to the same page several times and find an up-to-date answer to the same topic. The answer changes as new information becomes available, and anyone is allowed to bring in new relevant information as long as certain rules of co-creation are followed. In a sense, the answer of a knowledge crystal is never final but it is always usable.

A knowledge crystal is a practical information structure that was designed to comply with the principles of open policy practice. Open data principles are used when possible[35]. For example, openness and criticism are implemented by allowing anyone to contribute, but only after critical examination. Knowledge crystals differ from open data, which contains little to no interpretation, and from scientific articles, which are not updated. The rationale is the place for new information and discussions, and resolutions about new information may change the answer.

The purpose of knowledge crystals is to offer a versatile information structure for nodes in an insight network that describes a complex policy issue. They handle research questions of any topic and describe all causal and non-causal relations from other nodes (i.e. the nodes that may affect the answer of the node under scrutiny). They contain information as necessary: text, images, mathematics, or other forms, both quantitative and qualitative. They handle facts or values depending on the questions, and withstand misconceptions and fuzzy thinking as well. Finally, they are intended to be found online by anyone interested, and their main message to be understood and used even by a non-expert.

Table 7. The attributes of a knowledge crystal.
Attribute Description
Name An identifier for the knowledge crystal. Each page has a permanent, unique name and identifier or URL.
Question A research question that is to be answered. It defines the scope of the knowledge crystal. Assessments have specific sub-attributes for questions (see section Settings of assessments)
Answer An understandable and useful answer to the question. It is the current best synthesis of all available data. Typically it has a descriptive easy-to-read summary and a detailed quantitative result published as open data. An answer may contain several competing hypotheses, if they all hold against scientific critique. This way, it may include an accurate description of the uncertainty of the answer, often in a probabilistic way.
Rationale Any information that is necessary to convince a critical rational reader that the answer is credible and usable. It presents to a reader the information required to derive the answer and explains how it is formed. It may have different sub-attributes depending on the page type, some examples are listed below.
  • Data tell about direct observations (or expert judgements) about the topic.
  • Dependencies tell what is known about how upstream knowledge crystals (i.e. causal parents) affect the answer. Dependencies may describe functional or probabilistic relationships. In an insight network, dependencies are described as arrows pointing toward the knowledge crystal.
  • Calculations are an operationalisation of how to calculate or derive the answer. It uses algebra, computer code, or other explicit methods if possible.
  • Discussions are structured or unstructured discussions about the details of the substance, or about the production of substantive information. On a wiki, discussions are typically located on the talk page of the substance page.
Other In addition to attributes, it is practical to have clarifying subheadings on a knowledge crystal page. These include: See also, Keywords, References, Related files

There are different types of knowledge crystals for different uses. Variables contain substantive topics such as emissions of a pollutant, food consumption or other behaviour of an individual, or disease burden in a population (for examples, see Figure 3 and Appendix S2.) Assessments describe the information needs of particular decision situations and work processes designed to answer those needs. They may also describe whole models (consisting of variables) for simulating impacts of a decision. Methods describe specific procedures to organise or analyse information. The question of a method typically starts with "How to...". For a list of all knowledge crystal types used at Opasnet web-workspace, see Appendix S3.

Openness and collaboration are promoted by design: knowledge crystals are modular, re-usable, and readable for humans and machines. This enables their direct use in several assessment models or internet applications, which is important for the efficiency of the work. Methods are used to standardise and facilitate the work across assessments.
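To make the structure concrete, a knowledge crystal following the attributes in Table 7 could be sketched as a simple data structure; the page name, URL, and numbers below are hypothetical, and Opasnet actually stores these as structured wiki pages:

<pre>
knowledge_crystal <- list(
  name     = "Dioxin concentration in Baltic herring",
  id       = "http://en.opasnet.org/w/Example_page",    # hypothetical identifier
  question = "What is the dioxin concentration in Baltic herring by age of fish?",
  answer   = list(
    summary = "Concentrations increase with the age of the fish.",
    result  = data.frame(age = c(2, 5, 10), conc = c(2, 5, 9))  # illustrative numbers
  ),
  rationale = list(
    data         = "References to measurement campaigns",
    dependencies = c("Dioxin deposition into the Baltic Sea"),
    calculations = "R code deriving the answer from data and dependencies",
    discussions  = "Link to the talk page of the knowledge crystal"
  )
)
</pre>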

Open web-workspaces

Insight networks, knowledge crystals, and open assessments are information objects that were not directly supported by any web-workspace available at the time of development. Therefore, web-workspaces have been developed specifically for open policy practice. There are two major web-workspaces for this purpose: Opasnet (designed for expert-driven open assessments) and Climate Watch (designed for evaluation and management of climate mitigation policies).

Opasnet

Opasnet is an open wiki-based web-workspace and prototype for performing open policy practice, launched in 2006. It is designed to offer functionalities and tools for performing open assessments so that most if not all work can be done openly online. Its name is a short version of Open Assessors' Network and also comes from the Finnish word for guide, "opas". The purpose was to test and learn co-creation among environmental health experts and to start opening the assessment process to interested stakeholders.

Opasnet is based on the MediaWiki platform because of its open-source code, wide use and abundance of additional packages, long-term prospects, functionalities for good research practices (e.g. talk pages for meta-level discussions), and full and automatic version control. Two language versions of Opasnet exist. The English Opasnet (en.opasnet.org) contains all international projects and most scientific information. The Finnish Opasnet (fi.opasnet.org) contains mostly project material for Finnish projects and pages targeted at Finnish audiences. A project wiki, Heande (short for Health, the Environment, and Everything), requires a password and contains information that cannot (yet) be published, but the open alternatives are preferred.

Opasnet facilitates simultaneous development of theoretical framework, assessment practices, assessment work, and supporting tools. This includes e.g. information structures, assessment methods, evaluation criteria, and online software models and libraries.

For modelling functionalities, the statistical software R is used via an R–MediaWiki interface. R code can be written directly on a wiki page and run by clicking a button. The resulting objects can be stored on the server and fetched later by other code. Complex models can be run with a web browser without installing anything. The server has automatic version control and archival of the model description, data, code, and results.

The R package OpasnetUtils is available in the CRAN repository (cran.r-project.org) to support knowledge crystals and impact assessment models. It contains the necessary functions and information structures. Specific functionalities facilitate reuse and explicit quantitation of uncertainties: scenarios can be defined on a wiki page or via a model user interface, and these scenarios can then be run without changing the model code. If input values are uncertain, uncertainties are automatically propagated through the model using Monte Carlo simulation.
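The principle of probabilistic uncertainty propagation can be sketched in a few lines of base R (this sketch does not use the OpasnetUtils functions, and the input distributions are assumed purely for illustration):

<pre>
set.seed(1)
N <- 10000                                             # Monte Carlo iterations
exposure <- rlnorm(N, meanlog = log(1), sdlog = 0.5)   # assumed exposure distribution
slope    <- rnorm(N, mean = 0.02, sd = 0.005)          # assumed risk per unit exposure
risk     <- exposure * slope                           # uncertainty propagates through the product

quantile(risk, c(0.05, 0.5, 0.95))                     # summary of the resulting distribution
</pre>

In OpasnetUtils, this kind of propagation is handled automatically for model variables, so that assessment authors can focus on describing the inputs and their dependencies.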

For data storage, Opasnet Base, a MongoDB no-sql database, is used. Each dataset must be linked to a single wiki page, which contains all the necessary descriptions and metadata about the data. Data can be uploaded to the database via a wiki page or a file uploader. The database has an open application programming interface for data retrieval.

For more details, see Appendix S4.

Climate Watch

Figure 4. System architecture of the Climate Watch web-workspace.

Climate Watch is a web-workspace primarily for evaluating and managing climate mitigation actions (Figure 4). It was originally developed in 2018-2019 by the city of Helsinki for its climate strategy. From the beginning, scalability was a key priority: the web-workspace was made generic enough to be easily used by other municipalities in Finland and globally, and to be used for evaluating and managing topics other than climate mitigation.

Climate Watch is described in more detail by Ignatius and coworkers[36]. In brief, Climate Watch consists of actions that aim to reduce climate emissions, and indicators that are supposedly affected by the actions and give insights about progress. Actions and indicators are knowledge crystals, and they are causally connected, thus forming an insight network. Each action and indicator has one or more contact people who are responsible for the reporting of progress (and sometimes for actually implementing the actions).

The requirements for choosing the technologies were wide availability, ease of development, and an architecture based on open application programming interfaces (APIs). The public-facing user interface uses the NextJS framework (https://nextjs.org/), which is based on the React user interface framework (https://reactjs.org/) and provides support for server-side rendering and search engine optimisation. The backend is built using the Django web framework (https://www.djangoproject.com/), which provides the contact people with an administrator user interface. The data flows to the Climate Watch interface over a GraphQL API (https://graphql.org/). GraphQL is a query standard that has gained wide traction in the web development community because of its flexibility and performance.
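
The following Python sketch illustrates how an external client could fetch actions and their indicators over such a GraphQL API; the endpoint and field names are hypothetical and do not reflect the actual Climate Watch schema.

  import requests

  # Hypothetical endpoint and schema; the actual schema is defined by the Django backend.
  GRAPHQL_URL = "https://example.org/climate-watch/graphql"

  query = """
  query {
    actions {
      id
      name
      indicators { id name latestValue }
    }
  }
  """

  resp = requests.post(GRAPHQL_URL, json={"query": query}, timeout=30)
  resp.raise_for_status()
  data = resp.json()["data"]

  # Print each action together with the indicators it is causally connected to.
  for action in data["actions"]:
      print(action["name"], "->", [i["name"] for i in action["indicators"]])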

Opasnet and Climate Watch have functional similarities but different technical solutions. The user interfaces for end-users and administrators in Climate Watch have similar purposes as MediaWiki in Opasnet; and while impact assessment and model development are performed by using R at Opasnet, Climate Watch uses Python, Dash, and Jupyter.

Open policy ontology

Open policy ontology is used to describe all the information structures and policy content in a systematic, coherent, and unambiguous way. The ontology is based on the concepts of open linked data and resource description framework (RDF) by the World Wide Web Consortium[37].

The ontology is based on vocabularies with specified terms and meanings. The relations between terms are also explicit. The resource description framework is based on the idea of triples, which have three parts: subject, predicate (or relation), and object. These can be thought of as sentences: an item (subject) is related to (predicate) another item or value (object), thus forming a claim. Claims can further be specified using qualifiers and backed up by references. Insight networks can be documented as triples, and a set of triples using this ontology can be visualised as an insight network diagram. Triple databases enable wide, decentralised linking of various sources and information.
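
As a minimal illustration of the triple idea, the following Python sketch uses the rdflib library to encode a three-node insight network as subject-predicate-object triples and serialise it as Turtle; the namespace and term names are invented for the example and are not the official open policy ontology terms.

  from rdflib import Graph, Literal, Namespace

  # Hypothetical namespace and terms; the actual ontology terms are described in Appendix S3.
  EX = Namespace("http://en.opasnet.org/entity/")

  g = Graph()
  g.bind("ex", EX)

  # Triples of the form (subject, predicate, object) forming a small insight network
  g.add((EX.wood_combustion, EX.increases, EX.fine_particle_exposure))
  g.add((EX.fine_particle_exposure, EX.increases, EX.premature_deaths))
  g.add((EX.premature_deaths, EX.relevant_to, EX.heating_policy_decision))
  g.add((EX.wood_combustion, EX.has_description,
         Literal("Residential wood combustion in urban areas")))

  print(g.serialize(format="turtle"))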

Open policy ontology (see Appendix S3) describes all information objects and terms described above, making sure that there is a relevant item type or relation to every critical piece of information that is described in an insight network, open assessment, or shared understanding. "Critical piece of information" means something that is worth describing as a separate node, so that it can be more easily found, understood, and used. A node itself may contain large amounts of information and data, but for the purpose of producing shared understanding about a particular decision, there is no need to highlight the node's internal data on an insight network.

The ontology was used with indicator production in the climate strategy of Helsinki[27] and a visualisation project of insight networks[38].

For a full description of the current vocabulary in the ontology, see Appendix S3 and Figures S2-3 and S2-4 in Appendix S2.

Novel concepts

This section presents novel concepts that have been identified as useful for a particular need and conceptually coherent with open policy practice. However, they have not been thoroughly tested in practical assessments of policy support.

A value profile is a documented list of the values, preferences, and choices of a participant. Voting advice applications are online tools that ask electoral candidates about their values, world views, or the decisions they would make if elected. The voters can then answer the same questions and analyse which candidates share their values. Nowadays, such applications are routinely developed by all major media houses for every national election in Finland. Thus, voting advice applications produce a kind of value profile. However, these tools are not used to collect value profiles from the public for actual decision making or between elections, although such information could be used in decision support. Value profiles are mydata, i.e. data of an individual where they themselves can decide who is able to see and use it. This requires trusted and secure information systems.

An archetype is an internally coherent value profile of an anonymised group of people. Coherence means that when two values are in conflict, the value profile describes which one to prefer. Archetypes are published as open data describing the number of supporters but not their identities. People may support an archetype in full or by declaring partial support for some specific values. Archetypes aim to save effort in gathering value data from the public: when archetypes are used, not everyone needs to answer all possible questions. They also increase security, since there is no need to handle individual people's potentially sensitive value profiles when open, aggregated data about archetypes suffices.
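
A minimal sketch of how individual value profiles could be aggregated into open archetype data is given below; the questions, answers, and data structure are hypothetical and only illustrate the aggregation and anonymisation idea.

  from collections import Counter

  # Hypothetical value profiles: each participant states a position on a few
  # policy-relevant value questions.
  profiles = [
      {"climate_priority": "high", "state_role": "active"},
      {"climate_priority": "high", "state_role": "active"},
      {"climate_priority": "high", "state_role": "passive"},
      {"climate_priority": "low", "state_role": "passive"},
  ]

  # An archetype is published only as aggregated, anonymised support counts.
  archetypes = Counter(tuple(sorted(p.items())) for p in profiles)

  for values, supporters in archetypes.most_common():
      print(dict(values), "supporters:", supporters)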

Political strategy papers typically contain the explicit values of an organisation, aggregated in some way from its members' individual values. The strategic values are then used in the organisation in a normative way, implying that the members should support these values in their membership roles. An archetype differs from this, because it is descriptive rather than normative, and a "membership" in an archetype does not imply any rights or responsibilities. Yet, political parties could also use archetypes to describe the values of their members.

The use of archetypes is based on the assumption that although their potential number is very large, most of a population's values relevant for a particular policy can be covered with a manageable number of archetypes. As a comparison, there are usually from two to a dozen significant political parties in a democratic country rather than hundreds. There is also research on human values showing that they can be systematically evaluated using a fairly small number (e.g. 4, 10, or 19) of dimensions[39].

Paradigms are collections of rules describing the inferences that participants would make from data in the system. For example, the scientific paradigm has rules about criticism and a requirement that statements must be backed up by data or references. Participants are free to develop paradigms with any rules of their choosing, as long as they can be documented and operationalised within the system. For example, a paradigm may state that when in conflict, priority is given to the opinion presented by a particular authority. Hybrid paradigms are also allowed. For example, a political party may follow the scientific paradigm in most cases, but when economic assessments are ambiguous, the party chooses an interpretation that emphasises the importance of an economically active state (or, alternatively, a market approach with a passive state).

Destructive policy is a policy that a) is actually being implemented or planned, making it politically relevant, b) causes significant harm to most or all stakeholder groups, as measured using their own interests and objectives, and c) has a feasible, less harmful alternative. Societal benefits are likely to be greater if a destructive policy is identified and abandoned, compared with a situation where an assessment only focuses on showing that one good policy option is slightly better than another one.

There are a few mechanisms that may explain why destructive policies exist. First, a powerful group can dominate the policymaking to their own benefit, causing harm to others. Second, the "prisoner's dilemma" or "tragedy of the commons" makes a globally optimal solution suboptimal for each individual stakeholder group, thus draining support from it. Third, the issue is so complex that the stability of the whole system is threatened by changes[40]. Advice about destructive policies may produce support for paths out of these frozen situations.

An analysis of destructive policies attempts to systematically analyse policy options and identify, describe, and motivate rejection of those that appear destructive. The tentative questions for such an analysis include the following.

  • Are there relevant policy options or practices that are not being assessed?
  • Do the policy options have externalities that are not being assessed?
  • Are there relevant priorities among stakeholders that are not being assessed?
  • Is there strong opposition against some options among the experts or stakeholders? What is the reasoning for and science behind the opposition?
  • Is there scientific evidence that an option is unable to reach the objectives or is significantly worse than another option?

The current political actions to mitigate the climate crisis are so far from the global sustainability goals that there must be some destructive policies in place. Identification of destructive policies often requires that an assessor thinks outside the box and is not restricted to default research questions. In this example, such questions could be: "What is a policy B that fulfils the objectives of the current policy A but with fewer climate emissions?", and "Can we reject the null hypothesis that A is better than B in the light of data and all major archetypes?" This approach is based on the premise that rejection is more effective than confirmation, an idea already presented by Karl Popper[4].

Parts of open policy practice have been used in several assessments. In this article, we will evaluate how these methods have performed.

Methods

The methods of open policy practice were critically evaluated. The open assessments performed (Appendix S1) were used as the material for evaluation. The properties of good policy support (Table 3) were used as evaluation criteria in a similar way as in a previous evaluation[23]. In addition, open policy practice as a whole was evaluated using the categories of interaction (Table 6) and the test of shared understanding (Table 2) as criteria[25]. The key questions in the evaluations were the following. Does open policy practice have the properties of good policy support? And does it enable policy support according to the five principles of open policy practice in Table 1? For each method within open policy practice, these questions were asked: In what way could the method materialise improvements in the property considered? Is there evidence or experience showing that improvement has actually happened in practice? Has the method shown disadvantages or side effects when implemented?

Results

Different methods of open policy practice were evaluated for their potential or observed advantages and disadvantages according to the properties of good policy support. Major advantages are listed in Table 8. Some advantages, as well as disadvantages and problems, are discussed in more detail in the text. The text is organised along the properties of good policy support, the categories of interaction, and the test of shared understanding.

Table 8. Methods evaluated based on properties of good policy support. Background colours: white: no anticipated benefit, yellow: potential benefit, green: actual benefit observed in open policy practice materials. Numbers in parentheses refer to the assessments in Appendix S1, Table S1-1. The last row contains general suggestions to improve policy support with respect to each property.
Method Quality of content Relevance Availability Usability Acceptability Efficiency
Co-creation Participants bring new info (2, 3, 25, 26) New questions are identified during collaborative work (6, 11) Draft results raise awareness during work (2, 8, 27) Readers ask clarifying questions and learn and create understanding through collaboration Participants are committed to conclusions (2, 8, 27) Collaboration integrates communication to decision makers and stakeholders (users) into the making, which saves time and effort
Open assessment It combines functionalities of other methods and enables peer-reviewed assessment models (4, 5, 16) End-user discussions improve assessment (16, 26, 27) It is available as draft since the beginning Standard structure facilitates use (8-9) Openness was praised (3, 8, 9, 21) Scope can be widened incrementally (12-16)
Insight network It brings structure to assessment and helps causal reasoning (8, 9, 11, 16, 17) It helps and clarifies discussions between decision makers and experts (8, 9) Readers see what is excluded It helps to check whether important issues are missing
Knowledge crystal They streamline work and provide tools for quantitative assessments (e.g. 3, 23, 24) They clarify questions (1, 6) It is mostly easy to see where information should be found Summaries help to understand They make the intentionality visible by describing the assessment question Answers can be reused across assessments (12–16, 23-24)
Web-workspace Its structure supports high-quality content production when moderated (8, 9) It combines user needs and open policy practice (8, 9) It offers an easy approach to and archive of materials (16, 21, 23, 26) The user needs guided the functions developed (8) It offers a place to document shared understanding and distribute information broadly.
Structured discussion It helps to moderate discussion and discourages low-quality contributions (2, 30) It guides focus on important topics (16, 30) Threads help to focus reading User feedback has been positive: it helps to focus on key issues (8, 30) Structure discourages redundancy
Open policy ontology It gives structure to insight networks and structured discussions (8, 16, 30) Ontology clarifies issues and relations
Value profile and archetype Value profiles help to prioritise (8) Voting advice applications may offer an example Stakeholders' values are better heard Archetypes are effective summaries
Paradigm It motivates clear reasoning It systematically describes conflicting reasonings Stakeholders' reasonings are better heard It helps to analyse inferences of different groups
Analysis of destructive policies It widens the scope (3, 8) It emphasises mistakes to be avoided Focus is on everyone's problems Lessons learned can be reused in other decisions
Suggestions by open policy practice Work openly, invite criticism. Use tools and moderation to encourage high-quality contributions (Table 1.) Acknowledge the need for and potential of co-creation, discussion, and revised scoping. Invite all to policy support work. Characterize the setting (Table 4.) Design processes and information to be open from the beginning. Use open web-workspaces. (Table 5.) Invite participation from the problem owner and user groups early on. Use user feedback to visualise, clarify, and target content (Table 6.) Be open. Clarify reasoning. Acknowledge disagreements. Use the test of shared understanding (Table 2.) Combine information production, synthesis, and use to a co-creation process to save time and resources. Use shared information objects with open license, e.g. knowledge crystals.

Quality of content

Open policy practice aims at high-quality information for decision makers. One of the ideas is that openness and co-creation enable external experts to see and criticise the content at all times so that corrections can be made. Participation by decision makers, stakeholders, and experts outside an assessment team is typically less common than would be ideal and requires special effort. Participation has been markedly higher in projects where special emphasis and effort have been put into dissemination and facilitation, such as the Climate Watch and the Transport and communications strategy (assessments 8 and 26 in Table S1-1). Resources should be allocated to facilitation already when planning a policy process to ensure useful co-creation.

Participation is a challenge also in Wikipedia, where only a few percent of readers ever contribute, and the fraction of active contributors is even smaller[41]. Indeed, the quality of content in Wikipedia is better in topics that are popular and have a lot of contributors.

Active participation did not take care of quality control on behalf of the assessors; quality still had to be ensured by the usual means. In any case, open policy practice does not restrict the use of common quality control methods, and therefore it has at least the same potential to produce high-quality assessments as approaches using the common methods. The quality of open assessments has been acceptable for publishing in peer-reviewed scientific journals.

Relevance

What is relevant for a decision process can be a highly disputed topic. Shared interaction implies that stakeholders can and should participate in discussions about relevance and in revising the scoping when necessary. In other words, everyone is invited to policy support work. The setting of an assessment (Table 4) helps participants to see what the assessment is about.

The analysis of destructive policies can be used as a method to focus on critical aspects of an assessment and thus increase relevance. For example, Climate Watch has an impact assessment tool[42] that dynamically simulates the total greenhouse gas emissions of Helsinki based on scenarios provided by the user. The tool is able to demonstrate destructive policies: for example, if the emission factor of district heating production does not significantly decrease in ten years, it will be impossible to reach the emission targets of Helsinki. Thus, there are sets of solutions that could be chosen because of their appealing details but that would not reduce the emission factor. The tool explicitly demonstrates that these solutions fail to reach the objectives. It also demonstrates that the emission factor is a critical variable that must be evaluated and managed carefully to avoid destructive outcomes.
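
The underlying arithmetic can be illustrated with a deliberately simplified calculation; all numbers below are hypothetical and chosen only to show why the district heating emission factor dominates the outcome, whereas the actual tool uses the city's own data and a far more detailed model.

  # All numbers are hypothetical and for illustration only.
  heat_demand_gwh = 6000          # annual district heat consumption, GWh
  other_emissions_kt = 800        # all other emission sources combined, kt CO2e
  target_kt = 1500                # assumed total emission target, kt CO2e

  scenarios = {
      "emission factor stays at current level": 200,   # g CO2e per kWh of heat
      "emission factor halved in ten years": 100,
  }

  for name, factor_g_per_kwh in scenarios.items():
      heating_kt = heat_demand_gwh * 1e6 * factor_g_per_kwh / 1e9  # GWh -> kWh -> kt CO2e
      total = heating_kt + other_emissions_kt
      print(f"{name}: total {total:.0f} kt CO2e, target met: {total <= target_kt}")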

Other examples include the Helsinki energy decision assessment (assessment 3 in Table S1-1). It showed that residential wood combustion was a devastating way to heat houses in urban areas and health risks were much larger than with any other heating method. Yet, this is a popular practice in Finland, and there is clearly a need for dissemination about this destructive practice. Also, a health benefit–risk assessment showed that whatever policy is chosen with dioxins and young women, it should not reduce Baltic fish consumption in other population subgroups (assessment 16 in Table S1-1). This is because the dioxin health risk, while small, is concentrated in the population subgroup of young women, while all other subgroups would clearly benefit from increased fish intake.

Availability

The tools and web-workspaces presented in this article facilitated the availability of information. In addition, many policy processes were designed in such a way that information was open from the beginning. Increased openness in society has increased demands to make information available in situations where experts used to keep details to themselves. For example, source codes of assessment models have increasingly been made openly available, and Opasnet made that possible for these assessments.

Timing of availability is critical in a policy process, and assessment results are preferably available early on. This is a major challenge, because political processes may proceed rapidly and change focus, while quantitative assessments take time. A positive example of agility was a dioxin assessment model that had been developed in several projects over a few years (assessment 16 in Table S1-1)[31]. When the European Food Safety Authority released its new estimates about dioxin impacts on sperm concentration[43], the assessment model was updated and new sperm concentration results were produced within days. This was possible because the existing dioxin model was modular and built on knowledge crystals, so only the part describing sperm effects needed updating before the model was rerun.

Availability of previous versions may be critical. Many experts were reluctant to make their texts available in draft assessments if other people were able to edit them, but this fear was often alleviated by the fact that previous versions were always available if needed in the Opasnet version control. Availability was also improved as information was produced in a proper format for archiving, backups were produced automatically, and it was easy to produce a snapshot of a final assessment. It was not necessary to copy information from one repository to another, but in a few cases the final assessments were stored in external open data repositories.

In structured discussion, hierarchical threads increased availability, because a reader did not need to read further if they agreed with the topmost arguments (assessment 30 in Table S1-1). On the other hand, any thread could be individually scrutinised to the last detail if needed.

Usability

Co-creation activities demonstrated the utility of participation and feedback (assessments 6, 8, Table S1-1). Even with good substance knowledge, an assessor cannot know the aspects and concerns a decision maker may have. Usability of information was clearly improved when problem owners and user groups were invited to participate early on. User feedback proved to be very useful to visualise, clarify, and target content.

The climate strategy of Helsinki (assessment 8, Table S1-1) took the usability challenge seriously, and the Climate Watch website was developed from scratch based on open source code modules, intensive user testing, and service design. Insight networks and knowledge crystals were basic building blocks of the system architecture. It received almost exclusively positive feedback from both users and experts. A lot of emphasis was also put on building a user community, and city authorities, other municipalities, and research institutes have shown interest in collaboration. In contrast, Opasnet was designed as a generic tool for all kinds of assessments but without an existing end-user demand. As a result, the uptake of Climate Watch has been much quicker.

Insight network provides a method to illustrate and analyse a complex decision situation, while knowledge crystals offer help in describing quantitative nuances within the nodes or arrows, such as functional or probabilistic relations or estimates. There are tools with both graphical and modelling functionalities, e.g. Hugin (Hugin Expert A/S, Aalborg, Denmark) for Bayesian belief networks and Analytica® (Lumina Decision Systems Inc, Los Gatos, CA, USA) for Monte Carlo simulation. However, these tools are designed for a single desktop user rather than for open co-creation. In addition, they have limited possibilities for adding non-causal nodes and links or free-format discussions about the topics.

Insight networks were often complex and therefore better suited for detailed expert or policy work rather than for general dissemination. Other dissemination methods were needed as well. This was true also for knowledge crystals, although page summaries helped dissemination.

A knowledge crystal is typically structured so that it starts with a summary, then describes a research question and gives a more detailed answer, and finally provides the user with relevant and increasingly detailed information in a rationale. This increased the usability of a page among different user groups. On the other hand, some people found this structure confusing, as they did not expect to see all the details of an assessment. Users were unsure about the status of a knowledge crystal page and whether some information was up to date or still missing. This was because many pages were work in progress rather than finalised products. This was clarified by adding status declarations at the tops of pages. Declaring drafts as drafts also helped experts who were uncomfortable with showing their own work before it was fully complete.

Voting advice applications share properties with value profiles and archetypes, and offer material for concept development. The popularity of these applications implies that there is a societal need for value analysis and aggregation. The data have been used to understand differences between individuals and political groups in Finland. With more nuanced data, a set of archetypes can probably be developed to describe common and important values in the population. Some of them may have the potential to grow in popularity and form a kind of virtual party that represents the population's key values.

Value profiles and paradigms were tested in structured discussions and shared understanding descriptions (assessment 30, Table S1-1). Helsinki also tested value profiles in prioritising the development of Climate Watch. They were found to be promising and conceptually sound ideas in this context. Data that resemble value profiles are being collected by social media companies, but the data are used to inform marketing actions, often without the individual's awareness, so they are not mydata. In contrast, the purpose of value profile data is to inform societal decisions with consent from the data owner rather than to nudge the voter to act according to a social media company's wishes. The recent microtargeting activities by Cambridge Analytica and AggregateIQ using value-profile-like data proved very effective in influencing voting decisions[44]. Value profiles are clearly severely underutilised as a tool to inform decisions. We are not aware of systems that would collect value profile data for actual democratic policy support between elections.

Acceptability

A major factor increasing acceptability was whether the stakeholders thought that they had been given all relevant information and whether their concerns had been heard. This emphasised the need to be open and clarify reasonings of different stakeholders. It was also found important to acknowledge disagreements. The test of shared understanding (Table 2.) appeared to be a useful tool in documenting these aspects.

Experts were often reluctant to participate in open assessments because they had concerns about the acceptability of the process. They thought that expertise is not given proper weight if open participation is allowed. They feared that strong lobbying groups would hijack the process. They feared that self-organised groups would produce low-quality information or even malevolent disinformation. They often demanded the final say as the ultimate quality criterion, rather than trusting that data, reasoning, and critical discussion would do a better job. In brief, experts commonly thought that it is simply easier and more efficient to produce high-quality information in closed expert groups.

In a vaccine-related assessment (2, Table S1-1), comments and critique were received from both the drug industry and vaccine-related citizen organisations by using active facilitation, and they were all very matter-of-fact. This was interesting, as the same topics caused outrage in social media, but this was not seen in the structured assessments. This was possibly because the questions asked were specific and typically required some background knowledge of the topic. Interestingly, one of the most common objections and fears against open assessment was that citizen contributions are ill-informed and malevolent. The experience with open assessments showed that they were not.

Efficiency

Open policy practice combines information production, synthesis, and use into a single co-creation endeavour covering a whole policy process. When successful, this approach saved time and resources because of parallel work and rapid feedback and guidance. However, not all open assessments were optimally designed to maximise co-creation between decision makers and experts. Rather, efficiency was typically achieved when knowledge crystals improved structure and reuse and thus saved resources in assessment modelling.

A common solution to co-operation needs seemed to be a strict division of tasks. Detailed understanding of and contributions to other groups' work and models remained low or non-existent. This was typical in large assessment projects (assessments 4, 5, 7, Table S1-1). On the other hand, most researchers were happy in their own niche and did not expect that other experts could or should learn the details of their work. Consequently, the perceived need for shared tools or open data was often low, which hindered mutual sharing, learning, and reuse.

The implementation phase of Climate Watch, which started in December 2018, also involved citizens, decision-makers, and other municipalities. It was the largest case study so far using open policy practice. It combined existing climate emission models for municipalities and produced new ones. A long-term objective was to collect detailed input data, ideally about the whole country, and offer all models to all municipalities, thus maximising reuse.

An important skill in open policy practice was to learn to identify important pieces of relevant information (such as scientific facts, publications, discussions etc.) and to add that information to a proper place in an insight network by using the open policy ontology and a reasonable amount of work. The more user need there was for a piece of information, the more time it was worth spending on producing it. An ontology helped to do this in practice so that the output was understandable for both humans and computers.

Accumulation of scientific merit was a key motivator for researchers. Policy support work typically did not result in scientific articles. When researchers evaluated the efficiency of their own work, they preferred tasks that produced articles in addition to societal benefit. The same reasoning was seen with open assessments and knowledge crystals, resulting in reluctance to participate. Win-win situations could be found if policy processes were actively developed to contain research aspects, so that new information would be produced for decision makers but also for scientific audiences.

Categories of interaction

Assessment methods have changed remarkably in forty years. During the last decades, the trend has been from isolated to more open approaches, but all categories of interaction (Table 6) are still in use[7]. The trend among the open assessments (Appendix S1) also seemed to move toward more participatory processes. Enabling participation was not enough, as interaction required facilitation and active invitation of decision-makers, experts, and stakeholders. Although openness and participation were available in all the open assessments in theory, only a minority of them actually had enough resources for facilitation to realise good co-creation in practice. In the first open assessments in the early 2000s, people were not familiar even with the concepts of co-creation. In recent examples, especially in the Helsinki climate strategy (assessment 8, Table S1-1), co-creation and openness were insisted upon by decision-makers, civil servants, and experts alike. There was also political will to give resources for co-creation and facilitation. This resulted in actual shared interaction between all groups.

The example in Helsinki produced interest and enthusiasm both in climate activists and in other municipalities. The activists started to self-organise evaluation and monitoring using Climate Watch and to ask for explanations from civil servants whose actions were delayed. Several municipalities expressed their interest in starting to use Climate Watch in their own climate work, thus indicating that they had adopted the principles of openness and collaboration. This implies that although the popularity of co-creation increased slowly during previous years, good experiences and awareness increase the rate of change, thus resulting in supra-linear progress in interaction.

Test of shared understanding

Shared understanding clarified complex issues and elicited implicit valuations and reasonings in the open assessments. It facilitated rational discussion about a decision and explicated values of stakeholders e.g. about vaccines (assessments 1, 2 in Table S1-1). It also created political pressure against options that were not well substantiated, e.g. about health effects of food (assessment 31, Table S1-1). Shared understanding was approached even when a stakeholder was ignorant of or even hostile to new insights, or not interested in participating, such as in trip aggregation assessment or health benefit-risk assessment of Baltic fish (assessments 11 and 16, Table S1-1). Then, there was an attempt to describe stakeholders' views based on what other people know about their values. Everyone's views are seen as important policy-relevant information that may inform decision making.

Shared understanding was a well accepted idea among many decision makers in Finland. This was observed in collaboration with the Prime Minister's Office of Finland (assessment 27, Table S1-1). Many civil servants in ministries liked the idea that sometimes it is better to aim at understanding rather than consensus. They soon adopted the easy version of the term and started to use it in their own discussions and publications[45][46].

However, shared understanding was not unanimously accepted. Experts were often reluctant to start scientific discussions with citizens, especially if there were common or strong false beliefs about the topic among the public. In such cases, a typical argument was that the role of an expert is to inform and, if possible, suppress false beliefs rather than engage in producing common descriptions about differing views. The target seemed to be to convince the opponent rather than increase understanding among the audience.

The test of shared understanding was a useful tool to recognise when not all values, causal chains or decision makers' rationale were known and documented. Yet, lack of time or resources often prevented further facilitation, information collection, or expansion of the scope of an assessment.

Discussion

This article presents methods and tools designed for decision support. Many of them have already been successfully used, while others have been identified as important parts of open policy practice but have not been extensively tested.

The discussion is organised around the five principles of open policy practice: collaboration, openness, causality, criticism, and intentionality. The principles are looked at in the light of popularity, acceptance, and lessons learned from practical experience.

The five principles are not unique to open policy practice; on the contrary, they have been borrowed from various disciplines (for reviews, see [7][1]). The aim was to use solid principles to build a coherent set of methods that gives practical guidance to decision support. It is reassuring that many principles from the original collection[19] have increased in popularity in society. There are also studies comparing parts of open policy practice to other existing methods[47].

The results showed that the methods connected the five principles quite well to the properties of good policy support (Table 8). Open collaboration indeed resulted in high-quality content when knowledge crystals, web-workspaces and co-creation were utilised. End-user interaction and structured discussions helped to revise scoping and content, thus improving relevance and usability. Acknowledging disagreements and producing shared understanding created acceptability. And openly shared information objects such as data and models improved availability and efficiency.

The experiences of open policy practice demonstrate that it works as expected when the participants are committed to collaborating using the methods, practices, and tools. However, there have been fewer participants in most open assessments than had been hoped for. This can partly be influenced by the assessors' own actions, as reader and contributor numbers clearly went up with active facilitation or marketing with large media coverage and public interest. Some other reasons cannot easily be affected directly, such as inertia to change established practices or lack of scientific merit. Thus, a major long-term challenge is to build an attractive assessor community, culture, and incentives for decision support.

The GovLab in New York is an example of such activity (www.thegovlab.org). They have expert networks, training, projects, and data sources available to improve policy support. There is a need for similar tools and training designed to facilitate a change elsewhere. New practices could also be promoted by developing ways to give scientific — or political — merit and recognition more directly based on online co-creation contributions. The current publication counts and impact factors — or public votes — are very indirect measures of scientific or societal importance of the information or policies produced.

Knowledge crystals offer a collaboration forum for updating scientific understanding about a topic in a quicker and easier way than publishing scientific articles. Knowledge crystals are designed to be updated based on continuous discussion about the scientific issues (or valuations, depending on the topic) aiming to back up conclusions. In contrast, scientific articles are expected to stay permanently unchanged after publication. Articles offer little room for deliberation about the interpretation or meaning of the results after a manuscript is submitted: reviewer comments are often not published, and further discussion about an article is rare and mainly occurs only if serious problems are found. Indeed, the current scientific publishing system is poor in correcting errors via deliberation[48].

Shared understanding is difficult to achieve if the decision maker, media environment, or some political groups are indifferent about or even hostile against scientific knowledge or public values. For many interest groups, non-public lobbying, demonstrations and even spreading faulty information are attractive ways of influencing the outcome of a decision. These are problematic methods from the perspective of open policy practice, because they reduce the availability of important information in decision processes.

Further studies are needed on how open, information-based processes could be developed to be more tempting to groups that previously have preferred other methods. A key question is whether shared understanding is able to offer acceptable solutions to disagreeing parties and alleviate political conflict. Another question is whether currently under-represented groups have better visibility in such open processes. Also, more information is needed about how hostile contributions get handled, when they occur; fortunately, they were very rare in the open assessments.

There is no data about open policy practice usage in a hostile environment. Yet, open policy practice can be collaboratively used even without support from a decision maker or an important stakeholder. Although their objectives and values are important for an assessment, these may be either deduced indirectly from their actions, or even directly replaced by the objectives of the society at large. Thus, open policy practice is arguably a robust set of methods that can be used to bypass non-democratic power structures and focus on the needs of the public even in a non-optimal collaboration environment.

There is still a lot to learn about using co-created information in decision making. Experiences so far have demonstrated that decision making can be more evidence-informed than what it typically is, and several tools promoting this change are available.

Openness in science is a guiding principle and current megatrend, and its importance has been accepted much more widely during recent years. Yet, the practices in research are changing slowly, and many current practices are actually in conflict with openness. For example, it is common to hide expert work until it has been finalised and published, to publish in journals where content is not freely available, and to not open the data used.

A demand to produce assessments openly and describe all reasoning and data already from the beginning was often seen as an unreasonable requirement and made experts reluctant to participate. This observation raised two opposite conclusions: either that openness should be incentivised and promoted actively in all research and expert work[9], including decision support; or that openness as an objective hinders expert work and should be rejected. The latter conclusion was strong among experts in the early open assessments, but the former one has gained popularity.

There are several initiatives to open scientific processes, such as the Open Science Framework (www.osf.io). These are likely to promote change in science at large and indirectly also in scientific support of decision making.

Among experts, causality was seen as a backbone of impact modelling. In political arenas, causal discourse was not as prominent, as it was often noticed that there was actually little solid information about the most policy-relevant causal chains, and therefore values dominated policy discussions. Climate Watch was the most ambitious endeavour in the study material to quantify all major causal connections of a climate action plan. The approach was supported by the city administration and stakeholders alike. Causal quantification created an additional resource need that was not originally budgeted. It is not yet known how Helsinki, other cities, and research institutes will distribute the resources and tasks of causal modelling and the information produced. Yet, actions in the national energy and climate plans total 260 billion euro per year in the EU[49]. So, even minor improvements in the efficiency or effectiveness of climate actions would make causal assessments worthwhile.

Criticism has a central role in the scientific method. It is applied in practical situations, because rejecting poor statements is easier and more efficient than trying to prove statements true[4]. Most critique in open assessments was verbal or written discussion between participants, focussing on particular, often detailed topics. Useful information structures have been found for criticism, notably structured discussions that can target any part of an assessment (scope, data, premises, analyses, structure, results etc).

The current practices of open criticism in research are far from optimal, as criticism rarely happens. Pre-publishing peer review is almost the only time when scientific work is criticised by people outside the research group, and those are typically not open. A minute fraction of published works are criticised openly in journals; a poor work is simply not cited and subsequently forgotten. Interestingly, some administrative processes follow scientific principles better than many research processes do: for example, environmental impact assessment has a compulsory process for open criticism at both design and result phases[50].

Intentionality requires that the objectives and values of stakeholders in general and decision makers in particular are understood. In the studied assessments, some values were always identified and documented. But it was not common to systematically describe all relevant values, or even to ensure that the assessed objectives were actually the most important ones for the decision maker. There is clearly a need to prioritise facilitation and interaction about values.

In shared understanding, some claims were found to be unsubstantiated or clearly false. On the societal level, open policy practice aimed to increase political pressure against decisions based on poor ideas by explicating the problems and informing the public about them. The purpose was not to pressure individuals to reject their unsubstantiated thoughts. Personal beliefs were understood rather than threatened, because the aim was to build acceptance and facilitate contributions. However, it is not known what happens with very sensitive personal topics, because there were no such issues in the studied assessments.

Politics in western democracies is typically based on the premise that ultimately the citizens decide about things by voting. Therefore, in a sense, people cannot vote "wrong". In contrast, open policy practice is based on the premise that the objectives of the citizens are the ultimate guiding principle, and it is a matter of discussion, assessment, and other information work to suggest which paths should or should not be taken to reach these objectives. This thinking is close to James Madison's ideas about democracy in Federalist 63 from 1788[51]. In this context, people vote wrong if they vote for an option that is incapable of delivering the outcomes that they want.

If people are well-informed and have the time and capability to consider different alternatives, the two premises lead to similar outcomes. However, recent policy research has shown that this prerequisite is often not met, and people can be and increasingly are being misled, especially with modern microtargeting tools[44]. The need for protecting people and decision making from misleading information has been recognised.

Public institutions such as an independent justice system, a free press, and honest civil servants provide protection against misleading activities and disruptive policies. These democratic institutions have deteriorated globally, and in some countries particularly, even in places with a good track record[52].

Destructive policies may be an effective way to inform stakeholders in a grim societal environment. Open policy practice may not be very effective in choosing the best alternative among good ones, but it is probably more effective in identifying and rejecting poor alternatives, i.e. destructive policies, which is often more important. This is expected to result in more stable and predictable policies. It is possible to focus on disseminating information about what actions especially should not be taken, why, and how it is known. In such discourse, the message can be practical, short, and clear, and the rationale is available for anyone interested. Practical experiments are needed to tell whether this could reduce the support of destructive policies among the public.

Further research is also needed to study other aspects of destructive policies: Can such policies be unambiguously recognised? Is shared understanding about them convincing enough among decision makers to change policies? Does it cause objections about science being biased and partisan? Does open policy practice prevent destructive policies from gaining political support?

Conclusions

In conclusion, open policy practice works technically as expected. Open assessments can be performed openly online. They do not fail for the reasons many people think they will, namely low-quality contributions, malevolent attacks, or chaos caused by too many uninformed participants; these phenomena are very rare. Shared understanding has proved to be a useful concept that guides policy processes toward a more collaborative approach, whose purpose is wider understanding rather than winning.

However, open policy practice has not been adopted in expert work or decision support as expected. A key hindrance has been that the initial cost of learning and adopting new tools and practices has been higher than what an expert is willing to pay for participation in a single assessment, even if its impacts on the overall process are positive. The increased availability, acceptability, and inter-assessment efficiency have not yet been fully recognised by the scientific or policy community.

Active facilitation, community building and improving the user-friendliness of the tools were identified as key solutions in improving usability of the method in the future.

List of abbreviations

  • THL: Finnish Institute for Health and Welfare (government research institute in Finland)
  • IEHIAS: Integrated Environmental Health Impact Assessment System (a website)
  • RDF: resource description framework

Declarations

  • Ethics approval and consent to participate: Not applicable
  • Consent for publication: Not applicable
  • Availability of data and materials: The datasets generated and/or analysed during the current study are available at the Opasnet repository, http://en.opasnet.org/w/Open_policy_practice
  • Competing interests: The authors declare that they have no competing interests.
  • Funding: This work resulted from the BONUS GOHERR project (Integrated governance of Baltic herring and salmon stocks involving stakeholders, 2015-2018) that was supported by BONUS (Art 185), funded jointly by the EU, the Academy of Finland and the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning. Previous funders of the work: Centre of Excellence for Environmental Health Risk Analysis 2002-2007 (Academy of Finland), Beneris 2006-2009 (EU FP6 Food-CT-2006-022936), Intarese 2005-2011 (EU FP6 Integrated project in Global Change and Ecosystems, project number 018385), Heimtsa 2007-2011 (EU FP6 Global Change and Ecosystems project number GOCE-CT-2006-036913-2), Plantlibra 2010-2014 (EU FP7-KBBE-2009-3 project 245199), Urgenche 2011-2014 (EU FP7 Call FP7-ENV-2010 Project ID 265114), Finmerac 2006-2008 (Finnish Funding Agency for Innovation TEKES), Minera 2010-2013 (European Regional Development Fund), Scud 2005-2010 (Academy of Finland, grant 108571), Bioher 2008-2011 (Academy of Finland, grant 124306), Claih 2009-2012 (Academy of Finland, grant 129341), Yhtäköyttä 2015-2016 (Prime Minister's Office, Finland), Ympäristöterveysindikaattori 2018 (Ministry of Social Affairs and Health, Finland). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
  • Authors' contributions: JT and MP jointly developed the open assessment method and open policy practice. JT launched Opasnet web-workspace and supervised its development. TR developed OpasnetUtils software package from an original idea by JT and implemented several assessment models. All authors participated in several assessments and discussions about methods. JT wrote the first manuscript draft based on materials from MP and TR. All authors read and approved the final manuscript.
  • Acknowledgements: We thank Einari Happonen and Juha Villman for their work on developing Opasnet, and Juha Yrjölä and Tero Tikkanen for developing Climate Watch; and Arja Asikainen, John S. Evans, Alexanda Gens, Patrycja Gradowska, Päivi Haapasaari, Sonja-Maria Ignatius, Suvi Ignatius, Matti Jantunen, Anne Knol, Sami Majaniemi, Päivi Meriläinen, Kaisa Mäkelä, Raimo Muurinen, Jussi Nissilä, Juha Pekkanen, Mia Pihlajamäki, Teemu Ropponen, Kalle Ruokolainen, Simo Sarkki, Marko Tainio, Peter Tattersall, Hanna Tuomisto, Jouko Tuomisto, Matleena Tuomisto, and Pieta Tuomisto for crucial and inspiring discussions about methods and their implementation, and promoting these ideas on several forums.

Endnotes

a This paper has its foundations in environmental health, but the idea of decision support necessarily looks at aspects seen as relevant from the point of view of the decision maker, not from that of an expert in a particular field. Therefore, this article and also the method described deliberately take a wide view and cover all areas of expertise. However, all practical case studies have their main expertise needs in public health, and often specifically in environmental health.

b Whenever this article presents a term in italic (e.g. open assessment), it indicates that there is a page at the Opasnet web-workspace describing that term and that it can be accessed using a respective link (e.g. http://en.opasnet.org/w/Open_assessment).

c Insight network was originally called pyrkilo (and at some point also extended causal diagram). The word and concept pyrkilo was coined in 1997. In Finnish, pyrkilö means "an object or process that tends to produce or aims at producing certain kinds of products." The reasoning for using the word was that pyrkilo diagrams and related structured information such as models tend to improve understanding and thus decisions. The first wiki website was also called Pyrkilo, but the name was soon changed to Opasnet.

References and notes

  1. Pohjola M. Assessments are to change the world. Prerequisites for effective environmental health assessment. Helsinki: National Institute for Health and Welfare Research 105; 2013. http://urn.fi/URN:ISBN:978-952-245-883-4. Accessed 1 Feb 2020.
  2. Jussila H. Päätöksenteon tukena vai hyllyssä pölyttymässä? Sosiaalipoliittisen tutkimustiedon käyttö eduskuntatyössä. [Supporting decision making or sitting on a shelf? The use of sociopolitical research information in the Finnish Parliament.] Helsinki: Sosiaali- ja terveysturvan tutkimuksia 121; 2012. http://hdl.handle.net/10138/35919. Accessed 1 Feb 2020. (in Finnish)
  3. National Research Council. Risk Assessment in the Federal Government: Managing the Process. Washington DC: National Academy Press; 1983.
  4. Popper K. Conjectures and Refutations: The Growth of Scientific Knowledge; 1963. ISBN 0-415-04318-2.
  5. National Research Council. Understanding risk. Informing decisions in a democratic society. Washington DC: National Academy Press; 1996.
  6. von Winterfeldt D. Bridging the gap between science and decision making. PNAS 2013;110:3:14055-14061. http://www.pnas.org/content/110/Supplement_3/14055.full
  7. Pohjola MV, Leino O, Kollanus V, Tuomisto JT, Gunnlaugsdóttir H, Holm F, Kalogeras N, Luteijn JM, Magnússon SH, Odekerken G, Tijhuis MJ, Ueland O, White BC, Verhagen H. State of the art in benefit-risk analysis: Environmental health. Food Chem Toxicol. 2012;50:40-55.
  8. Doelle M, Sinclair JA. (2006) Time for a new approach to public participation in EA: Promoting cooperation and consensus for sustainability. Environmental Impact Assessment Review 26: 2: 185-205 https://doi.org/10.1016/j.eiar.2005.07.013.
  9. Federation of Finnish Learned Societies. (2020) Declaration for Open Science and Research (Finland) 2020-2025. https://avointiede.fi/fi/julistus. Accessed 1 Feb 2020.
  10. Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biol 2006: 4; e157. doi: 10.1371/journal.pbio.0040157
  11. Dalio R. Principles: Life and work. New York: Simon & Shuster; 2017. ISBN 9781501124020
  12. Tapscott D, Williams AD. Wikinomics. How mass collaboration changes everything. USA: Portfolio; 2006. ISBN 1591841380
  13. Surowiecki J. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. USA: Doubleday; Anchor; 2004. ISBN 9780385503860
  14. Noveck BS. Wiki Government - How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful. Brookings Institution Press; 2010. ISBN 9780815702757.
  15. Mauser W, Klepper G, Rice M, Schmalzbauer BS, Hackmann H, Leemans R, Moore H. Transdisciplinary global change research: the co-creation of knowledge for sustainability. Current Opinion in Environmental Sustainability 2013;5:420–431. doi:10.1016/j.cosust.2013.07.001.
  16. Giles J. Internet encyclopaedias go head to head. Nature 2005;438:900–901 doi:10.1038/438900a
  17. Pohjola MV, Pohjola P, Tainio M, Tuomisto JT. Perspectives to Performance of Environment and Health Assessments and Models—From Outputs to Outcomes? (Review). Int. J. Environ. Res. Public Health 2013;10:2621-2642. doi:10.3390/ijerph10072621.
  18. Pohjola MV, Tuomisto JT. Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 2011;10:58. http://www.ehjournal.net/content/10/1/58.
  19. Tuomisto JT, Pohjola M, editors. Open Risk Assessment. A new way of providing scientific information for decision-making. Helsinki: Publications of the National Public Health Institute B18; 2007. http://urn.fi/URN:ISBN:978-951-740-736-6.
  20. Tuomisto JT, Pohjola M, Pohjola P. Avoin päätöksentekokäytäntö voisi parantaa tiedon hyödyntämistä. [Open policy practice could improve knowledge use.] Yhteiskuntapolitiikka 2014;1:66-75. http://urn.fi/URN:NBN:fi-fe2014031821621 (in Finnish). Accessed 1 Feb 2020.
  21. Bondy JA, Murty USR. Graph Theory. Springer; 2008. ISBN 978-1-84628-969-9.
  22. Aitamurto T, Landemore H. Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law in Finland. Journal of Social Media for Organizations. 2015;2:1:1-19.
  23. Sandström V, Tuomisto JT, Majaniemi S, Rintala T, Pohjola MV. Evaluating effectiveness of open assessments on alternative biofuel sources. Sustainability: Science, Practice & Policy 2014;10;1. doi:10.1080/15487733.2014.11908132 Assessment: http://en.opasnet.org/w/Biofuel_assessments. Accessed 1 Feb 2020.
  24. Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. New York: Oxford University Press; 1991.
  25. Pohjola MV. Assessment of impacts to health, safety, and environment in the context of materials processing and related public policy. In: Bassim N, editor. Comprehensive Materials Processing Vol. 8. Elsevier Ltd; 2014. pp 151–162. doi:10.1016/B978-0-08-096532-1.00814-1
  26. van Kerkhoff L, Lebel L. Linking knowledge and action for sustainable development. Annu. Rev. Environ. Resour. 2006. 31:445-477. doi:10.1146/annurev.energy.31.102405.170850
  27. City of Helsinki. The Carbon-neutral Helsinki 2035 Action Plan. Publications of the Central Administration of the City of Helsinki 2018:4. http://carbonneutralcities.org/wp-content/uploads/2019/06/Carbon_neutral_Helsinki_Action_Plan_1503019_EN.pdf Assessment: https://ilmastovahti.hel.fi. Accessed 1 Feb 2020.
  28. Eemeren FH van, Grootendorst R. A systematic theory of argumentation: The pragma-dialectical approach. Cambridge: Cambridge University Press; 2004.
  29. Dung PM. (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games. Artificial Intelligence. 77 (2): 321–357. doi:10.1016/0004-3702(94)00041-X.
  30. Hastrup T. Knowledge crystal argumentation tree. https://dev.tietokide.fi/?Q10. Web tool. Accessed 1 Feb 2020.
  31. Tuomisto JT, Asikainen A, Meriläinen P, Haapasaari P. Health effects of nutrients and environmental pollutants in Baltic herring and salmon: a quantitative benefit-risk assessment. BMC Public Health 2020;20:64. https://doi.org/10.1186/s12889-019-8094-1 Assessment: http://en.opasnet.org/w/Goherr_assessment, data archive: https://osf.io/brxpt/. Accessed 1 Feb 2020.
  32. Tuomisto JT. TCDD: a challenge to mechanistic toxicology [Dissertation]. Kuopio: National Public Health Institute A7; 1999.
  33. Tuomisto JT, Pekkanen J, Alm S, Kurttio P, Venäläinen R, Juuti S et al. Deliberation process by an explicit factor-effect-value network (Pyrkilo): Paakkila asbestos mine case, Finland. Epidemiol 1999;10(4):S114.
  34. Tuomisto JT, Tainio M. An economic way of reducing health, environmental, and other pressures of urban traffic: a decision analysis on trip aggregation. BMC Public Health 2005;5:123. http://biomedcentral.com/1471-2458/5/123/abstract Assessment: http://en.opasnet.org/w/Cost-benefit_assessment_on_composite_traffic_in_Helsinki. Accessed 1 Feb 2020.
  35. Open Knowledge International. The Open Definition. http://opendefinition.org/. Accessed 1 Feb 2020.
  36. Ignatius S-M, Tuomisto JT, Yrjölä J, Muurinen R. (2020) From monitoring into collective problem solving: City Climate Tool. EIT Climate-KIC project: 190996 (Partner Accelerator).
  37. W3C. Resource Description Framework (RDF). https://www.w3.org/RDF/. Accessed 1 Feb 2020.
  38. Tuomisto JT. Näkemysverkot ympäristöpäätöksenteon tukena [Insight networks supporting the environmental policy making](in Finnish) Kokeilunpaikka. Website. https://www.kokeilunpaikka.fi/fi/kokeilu/nakemysverkot-ymparistopaatoksenteon-tukena. Accessed 1 Feb 2020.
  39. Schwartz SH, Cieciuch J, Vecchione M, Davidov E, Fischer R, Beierlein C, Ramos A, Verkasalo M, Lönnqvist J-E. Refining the theory of basic individual values. Journal of Personality and Social Psychology. 2012: 103; 663–688. doi: 10.1037/a0029393.
  40. Bostrom N. (2019) The Vulnerable World Hypothesis. Global Policy 10: 4: 455-476. https://doi.org/10.1111/1758-5899.12718.
  41. Wikipedia: Wikipedians. https://en.wikipedia.org/wiki/Wikipedia:Wikipedians. Accessed 1 Feb 2020
  42. Climate Watch. Impact and scenario tool. https://skenaario.hnh.fi/. Website. Accessed 1 Feb 2020.
  43. EFSA. Risk for animal and human health related to the presence of dioxins and dioxin‐like PCBs in feed and food. EFSA Journal 2018;16:5333. https://doi.org/10.2903/j.efsa.2018.5333
  44. UK Parliament. (2019) Disinformation and 'fake news': Final report. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm. Accessed 1 Feb 2020.
  45. Dufva M, Halonen M, Kari M, Koivisto T, Koivisto R, Myllyoja J. Kohti jaettua ymmärrystä työn tulevaisuudesta [Toward a shared understanding of the future of work]. Helsinki: Prime Minister's Office: Publications of the Government's analysis, assessment and research activities 33; 2017. (in Finnish) http://tietokayttoon.fi/julkaisu?pubid=18301. Accessed 1 Feb 2020.
  46. Oksanen K. Valtioneuvoston tulevaisuusselonteon 1. osa. Jaettu ymmärrys työn murroksesta [Government Report on the Future Part 1. A shared understanding of the transformation of work] Prime Minister’s Office Publications 13a; 2017. (in Finnish) http://urn.fi/URN:ISBN:978-952-287-432-0. Accessed 1 Feb 2020.
  47. Pohjola MV, Pohjola P, Paavola S, Bauters M, Tuomisto JT. (2011) Pragmatic knowledge services. Journal of Universal Computer Science 17, 472-497. https://doi.org/10.3217/jucs-017-03-0472.
  48. Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: A tragedy of errors. Nature 2016;530:27–29. doi:10.1038/530027a
  49. European Commission. (2019) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions united in delivering the energy union and climate action - setting the foundations for a successful clean energy transition. COM/2019/285 final https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52019DC0285. Accessed 1 Feb 2020.
  50. European Parliament. Directive 2014/52/EU of the European Parliament and of the Council of 16 April 2014 amending Directive 2011/92/EU on the assessment of the effects of certain public and private projects on the environment Text with EEA relevance. https://eur-lex.europa.eu/eli/dir/2014/52/oj Accessed 1 Feb 2020.
  51. Fishkin J. When the People Speak: Deliberative Democracy and Public Consultation. Oxford: Oxford University Press; 2011. ISBN 978-0199604432
  52. Freedom House. (2019) Freedom in the World 2019 https://freedomhouse.org/report/freedom-world/freedom-world-2019. Accessed 1 Feb 2020.

Figures and tables

Move them here for submission.

Appendix S1: Open assessments performed

A number of open assessments have been performed since 2004 in several research projects (see the funding declaration) and health assessments. Some assessments have also been done at the international Kuopio Risk Assessment Workshops for doctoral students in 2007, 2008, and 2009 and in the Master's course Decision Analysis and Risk Management (6 credit points), organised by the University of Eastern Finland (previously University of Kuopio) in 2011, 2013, 2015, and 2017.

More assessments can be found on the Opasnet page Category:Assessments.

Table S1-1. Some environmental health assessments performed using open assessment. References give links to both an assessment page and a scientific publication as applicable.
Topic # Assessment Year Project
Vaccine effectiveness and safety 1 Assessment of the health impacts of H1N1 vaccination[1] 2011 In-house, collaboration with Decision Analysis and Risk Management course
2 Tendering process for pneumococcal conjugate vaccine[2] 2014 In-house, collaboration with the National Vaccination Expert Group
Energy production, air pollution and climate change 3 Helsinki energy decision[3] 2015 In-house, collaboration with city of Helsinki
4 Climate change policies and health in Kuopio[4] 2014 Urgenche, collaboration with city of Kuopio
5 Climate change policies in Basel[5] 2015 Urgenche, collaboration with city of Basel
6 Availability of raw material for biodiesel production[6] 2012 Jatropha, collaboration with Neste Oil
7 Health impacts of small scale wood burning[7] 2011 Bioher, Claih
8 Climate strategy of Helsinki: Carbon neutral Helsinki 2035 action plan[8] 2018 In-house, collaboration with city of Helsinki
9 Climate mitigation of the social affairs and health sector in Finland[9] 2020 In-house, commissioned by the Prime Minister
Health, climate, and economic effects of traffic 10 Gasbus - health impacts of Helsinki bus traffic[10] 2004 Collaboration with Helsinki Metropolitan Area
11 Cost-benefit assessment on composite traffic in Helsinki[11] 2005 In-house
Risks and benefits of fish consumption 12 Benefit-risk assessment of Baltic herring in Finland[12] 2015 Collaboration with Finnish Food Safety Authority
13 Benefit-risk assessment of methyl mercury and omega-3 fatty acids in fish[13] 2009 Beneris
14 Benefit-risk assessment of fish consumption for Beneris[14] 2008 Beneris
15 Benefit-risk assessment on farmed salmon[15] 2004 In-house
16 Benefit-risk assessment of Baltic herring and salmon intake[16] 2018 BONUS GOHERR
Dioxins, fine particles 17 TCDD: A challenge to mechanistic toxicology[17] 1999 EC ENV4-CT96-0336
18 Comparative risk assessment of dioxin and fine particles[18] 2007 Beneris
Plant-based food supplements 19 Compound intake estimator[19] 2014 Plantlibra
Health and ecological risks of mining 20 Paakkila asbestos mine[20] 1999 In-house
21 Model for site-specific health and ecological assessments in mines[21] 2013 Minera
22 Risks of water from mine areas [22] 2018 Kaveri
Water safety 23 Water Guide for assessing health risks of drinking water contamination[23] 2013 Conpat
24 Bathing Water Guide for assessing health risks of bathing water contamination[24] 2019 Water Guide update
Organisational assessments 25 Analysis and discussion about research strategies or organisational changes within THL 2017 In-house
26 Transport and communication strategy in digital Finland[25] 2014 Collaboration with the Ministry of Transport and Communications of Finland
Information use in government or municipality decision support 27 Case studies: Assessment of immigrants' added value; Real-time co-editing, Fact-checking, Information design[26] 2016 Yhtäköyttä, collaboration with Prime Minister's Office
28 Evaluation of forest strategy process for Puijo, Kuopio[27] 2012 In-house
Indicator development 29 Environmental health indicators in Finland[28] 2018 Ympäristöterveysindikaattori
Structuring discussions 30 Developing and testing tools and practices for structured argumentation[29] 2019 Citizen Crystal
Food safety and diet 31 Health risks of chemical and microbial contaminants and dietary factors in food in Finland[30] 2019 Ruori, collaboration with e.g. Ministry of Agriculture and Prime Minister's Office

References for assessments

  1. Assessment: http://en.opasnet.org/w/Assessment_of_the_health_impacts_of_H1N1_vaccination. Accessed 1 Feb 2020.
  2. Assessment: http://en.opasnet.org/w/Tendering_process_for_pneumococcal_conjugate_vaccine. Accessed 1 Feb 2020.
  3. Tuomisto JT, Rintala J, Ordén P, Tuomisto HM, Rintala T. Helsingin energiapäätös 2015. Avoin arviointi terveys-, ilmasto- ja muista vaikutuksista. [Helsinki energy decision 2015. An open assessment on health, climate, and other impacts]. Helsinki: National Institute for Health and Welfare. Discussion paper 24; 2015. http://urn.fi/URN:ISBN:978-952-302-544-8 Assessment: http://en.opasnet.org/w/Helsinki_energy_decision_2015. Accessed 1 Feb 2020.
  4. Asikainen A, Pärjälä E, Jantunen M, Tuomisto JT, Sabel CE. Effects of Local Greenhouse Gas Abatement Strategies on Air Pollutant Emissions and on Health in Kuopio, Finland. Climate 2017;5(2):43; doi:10.3390/cli5020043 Assessment: http://en.opasnet.org/w/Climate_change_policies_and_health_in_Kuopio. Accessed 1 Feb 2020.
  5. Tuomisto JT, Niittynen M, Pärjälä E, Asikainen A, Perez L, Trüeb S, Jantunen M, Künzli N, Sabel CE. Building-related health impacts in European and Chinese cities: a scalable assessment method. Environmental Health 2015;14:93. doi:10.1186/s12940-015-0082-z Assessment: http://en.opasnet.org/w/Climate_change_policies_in_Basel. Accessed 1 Feb 2020.
  6. Sandström V, Tuomisto JT, Majaniemi S, Rintala T, Pohjola MV. Evaluating effectiveness of open assessments on alternative biofuel sources. Sustainability: Science, Practice & Policy 2014;10;1. doi:10.1080/15487733.2014.11908132 Assessment: http://en.opasnet.org/w/Biofuel_assessments. Accessed 1 Feb 2020.
  7. Taimisto P, Tainio M, Karvosenoja N, Kupiainen K, Porvari P, Karppinen A, Kangas L, Kukkonen J, Tuomisto JT. Evaluation of intake fractions for different subpopulations due to primary fine particulate matter (PM2.5) emitted from domestic wood combustion and traffic in Finland. Air Quality Atmosphere and Health 2011;4:3-4:199-209. doi:10.1007/s11869-011-0138-3 Assessment: http://en.opasnet.org/w/BIOHER_assessment. Accessed 1 Feb 2020.
  8. City of Helsinki. The Carbon-neutral Helsinki 2035 Action Plan. Publications of the Central Administration of the City of Helsinki 2018:4. http://carbonneutralcities.org/wp-content/uploads/2019/06/Carbon_neutral_Helsinki_Action_Plan_1503019_EN.pdf Assessment: https://ilmastovahti.hel.fi. Accessed 1 Feb 2020.
  9. Tuomisto JT. (2020) Climate emissions of the social affairs and health sector in Finland and potential mitigation actions. Assessment: https://hnpolut.dokku.teamy.fi. Accessed 1 Feb 2020
  10. Tainio M, Tuomisto JT, Hänninen O, Aarnio P, Koistinen KJ, Jantunen MJ, Pekkanen J. Health effects caused by primary fine particulate matter (PM2.5) emitted from buses in the Helsinki metropolitan area, Finland. Risk Analysis 2005;25:1:151-160. Assessment: http://en.opasnet.org/w/Gasbus_-_health_impacts_of_Helsinki_bus_traffic. Accessed 1 Feb 2020.
  11. Tuomisto JT, Tainio M. An economic way of reducing health, environmental, and other pressures of urban traffic: a decision analysis on trip aggregation. BMC Public Health 2005;5:123. http://biomedcentral.com/1471-2458/5/123/abstract Assessment: http://en.opasnet.org/w/Cost-benefit_assessment_on_composite_traffic_in_Helsinki. Accessed 1 Feb 2020.
  12. Tuomisto JT, Niittynen M, Turunen A, Ung-Lanki S, Kiviranta H, Harjunpää H, Vuorinen PJ, Rokka M, Ritvanen T, Hallikainen A. Itämeren silakka ravintona – Hyöty-haitta-analyysi. [Baltic herring as food - a benefit-risk analysis] ISBN 978-952-225-141-1. Helsinki: Eviran tutkimuksia 1; 2015 (in Finnish). Assessment: http://fi.opasnet.org/fi/Silakan_hy%C3%B6ty-riskiarvio. Accessed 1 Feb 2020.
  13. Leino O, Karjalainen AK, Tuomisto JT. Effects of docosahexaenoic acid and methylmercury on child's brain development due to consumption of fish by Finnish mother during pregnancy: A probabilistic modeling approach. Food Chem Toxicol. 2013;54:50-8. doi:10.1016/j.fct.2011.06.052. Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_of_methyl_mercury_and_omega-3_fatty_acids_in_fish. Accessed 1 Feb 2020.
  14. Gradowska PL. Food Benefit-Risk Assessment with Bayesian Belief Networks and Multivariable Exposure-Response. Delft: Delft University of Technology (doctoral dissertation); 2013. https://repository.tudelft.nl/islandora/object/uuid:9ced4cb2-9809-4b58-af25-34e458e8ea23/datastream/OBJ Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_of_fish_consumption_for_Beneris. Accessed 1 Feb 2020.
  15. Tuomisto JT, Tuomisto J, Tainio M, Niittynen M, Verkasalo P, Vartiainen T et al. Risk-benefit analysis of eating farmed salmon. Science 2004;305(5683):476. Assessment: http://en.opasnet.org/w/Benefit-risk_assessment_on_farmed_salmon. Accessed 1 Feb 2020.
  16. Tuomisto JT, Asikainen A, Meriläinen P, Haapasaari P. Health effects of nutrients and environmental pollutants in Baltic herring and salmon: a quantitative benefit-risk assessment. BMC Public Health 2020;20:64. https://doi.org/10.1186/s12889-019-8094-1 Assessment: http://en.opasnet.org/w/Goherr_assessment, data archive: https://osf.io/brxpt/. Accessed 1 Feb 2020.
  17. Tuomisto JT. TCDD: a challenge to mechanistic toxicology [Dissertation]. Kuopio: National Public Health Institute A7; 1999.
  18. Leino O, Tainio M, Tuomisto JT. Comparative risk analysis of dioxins in fish and fine particles from heavy-duty vehicles. Risk Anal. 2008;28(1):127-40. Assessment: http://en.opasnet.org/w/Comparative_risk_assessment_of_dioxin_and_fine_particles. Accessed 1 Feb 2020.
  19. Assessment: http://en.opasnet.org/w/Compound_intake_estimator. Accessed 1 Feb 2020.
  20. Tuomisto JT, Pekkanen J, Alm S, Kurttio P, Venäläinen R, Juuti S et al. Deliberation process by an explicit factor-effect-value network (Pyrkilo): Paakkila asbestos mine case, Finland. Epidemiol 1999;10(4):S114.
  21. Kauppila T, Komulainen H, Makkonen S, Tuomisto JT, editors. Metallikaivosalueiden ympäristöriskinarviointiosaamisen kehittäminen: MINERA-hankkeen loppuraportti. [Summary: Improving Environmental Risk Assessments for Metal Mines: Final Report of the MINERA Project.] Helsinki: Geological Survey of Finland, Research Report 199; 2013. 223 p. ISBN 978-952-217-231-0. Assessment: http://fi.opasnet.org/fi/Minera-malli. Accessed 1 Feb 2020.
  22. Assessment: http://fi.opasnet.org/fi/Kaivosvesien_riskit_(KAVERI-malli). Accessed 1 Feb 2020.
  23. Assessment: http://en.opasnet.org/w/Water_guide. Accessed 1 Feb 2020.
  24. Assessment: http://en.opasnet.org/w/Bathing_water_guide. Accessed 1 Feb 2020.
  25. Liikenne ja viestintä digitaalisessa Suomessa. Liikenne- ja viestintäministeriön tulevaisuuskatsaus 2014 [Transport and communication in digital Finland] Helsinki: Ministry of Transport and Communications; 2014. http://urn.fi/URN:ISBN:978-952-243-420-3 Assessment: http://fi.opasnet.org/fi/Liikenne_ja_viestint%C3%A4_digitaalisessa_Suomessa_2020. Accessed 1 Feb 2020.
  26. Tuomisto JT, Muurinen R, Paavola J-M, Asikainen A, Ropponen T, Nissilä J. Tiedon sitominen päätöksentekoon. [Binding knowledge to decision making] Helsinki: Publications of the Government's analysis, assessment and research activities 39; 2017. ISBN 978-952-287-386-6 http://tietokayttoon.fi/julkaisu?pubid=19001. Assessment: http://fi.opasnet.org/fi/Maahanmuuttoarviointi. Accessed 1 Feb 2020.
  27. Kajanus M, Ollikainen T, Partanen J, Vänskä I. Kävijätutkimukseen perustuva Puijon virkistysmetsien hoito- ja käyttösuunnitelma. [Forest strategy for recreational forests at Puijo, Kuopio, based on visitor study.] (in Finnish) Kuopion kaupunki, Metsätoimisto; 2010. http://fi.opasnet.org/fi-opwiki/images/8/8a/Puijo-loppuraportti.pdf. Assessment: http://fi.opasnet.org/fi/Puijon_metsien_k%C3%A4ytt%C3%B6suunnitelman_p%C3%A4%C3%A4t%C3%B6ksenteko Accessed 1 Feb 2020.
  28. Tuomisto JT, Asikainen A, Korhonen A, Lehtomäki H. Teemasivu ympäristöterveys [Portal: Environmental health]. A website, THL, 2018.
  29. Hastrup T. Knowledge crystal argumentation tree. https://dev.tietokide.fi/?Q10. Web tool. Accessed 1 Feb 2020.
  30. Suomi J, Haario P, et al. Costs and Risk Assessment of the Health Effects of the Food System. Publications of the Government's analysis, assessment and research activities 2019:64. http://urn.fi/URN:ISBN:978-952-287-797-0. Accessed 1 Feb 2020.

Appendix S2: Examples of insight networks

Appendix S3: Open policy ontology

Shared understanding aims at producing a description of different views, opinions, and facts related to a specific topic such as a decision process. The open policy ontology describes the information structures that are needed to document shared understanding of a complex decision situation. The purpose of the structure is to help people identify hidden premises, beliefs, and values and explicate possible discrepancies. This is expected to produce better understanding among participants.

The basic structure of a shared understanding is a network of items and relations between them. This network uses the Resource Description Framework (RDF), an ontology standard widely used to describe content on the Internet. Items and relations (also known as properties) are collectively called resources. Each item is typically of one of the types listed below. This information is documented using the property instance of (e.g. the Goherr assessment is an instance of assessment).
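To illustrate the structure, a minimal sketch in R of how such a network can be stored as subject–relation–object triples. The sketch is not part of the ontology specification, and the rows below (except the Goherr example above) are illustrative only:

  # A minimal sketch: an insight network stored as subject-relation-object triples.
  # The rows are illustrative examples, not normative content of the ontology.
  triples <- data.frame(
    subject  = c("Goherr assessment", "Fish consumption", "Dioxin exposure"),
    relation = c("instance of",       "increases",        "affects"),
    object   = c("assessment",        "Dioxin exposure",  "Developmental effects"),
    stringsAsFactors = FALSE
  )
  # Items are the nodes of the network; relations are its edges.
  items <- unique(c(triples$subject, triples$object))
  print(items)
  print(triples)

Any triple store or graph library could be used instead of a plain data frame; the essential point is that every claim becomes an explicit, machine-readable edge between two identified items.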

Items are written descriptions of the actual things (people, tasks, publications, or phenomena), and on this page these descriptions rather than the actual things are discussed. Different item types have different levels of standardisation and internal structure. For example, knowledge crystals are web pages that always have the headings question, answer, and rationale, and their information is organised under those headings. Other items describe, for example, statements, i.e. free-text descriptions of how a particular thing is or should be (according to a participant), and yet others are metadata about publications. The common feature is that all items contain information that is relevant for a decision.

In the open policy ontology, each item may contain lengthy texts, graphs, analyses, or even models. However, the focus here is on how the items are related to each other; the actual content is often referred to by a single key sentence (the description). Each item also has a unique identifier (URI) that is used for automatic handling of the data.

The most important items are knowledge crystals, which are described below.

  • Assessment describes a particular decision situation and focuses on estimating the impacts of different options. Its purpose is to support the making of that decision. Unlike other knowledge crystals, assessments typically have defined start and end dates, and they are closed after the decision is made. They also have contextually and situationally defined goals so that they can better serve the needs of the decision makers.
  • Variable answers a particular factual or ethical question that is typically needed in one or more assessments. The answer of a variable is continually updated as new information arises, but its question remains constant in time. Variables are the basic building blocks of assessments. In R, variables are typically implemented as ovariable objects from the OpasnetUtils package.
  • Method tells how to systematically implement a particular information task. Methods are the basic building blocks for describing the assessment work (not reality, as variables do). In practice, methods are "how-to" descriptions of how information should be produced, collected, analysed, or synthesised in an assessment. Typically, methods contain software code or another algorithm for performing the method easily. In R, methods are typically ovariables that require some context-specific upstream information about dependencies before they can be calculated.

There are also other important classes of items:

  • Publication is any documentation that contains useful information related to a decision. Publication types commonly used at Opasnet include encyclopedia articles, lectures, nuggets, and studies. Other publications at Opasnet are typically uploaded as files.
    • Encyclopedia article is an object that describes a topic, as in Wikipedia, rather than answering a specific research question. Encyclopedia articles do not have a predefined attribute structure.
    • Lecture contains a piece of information that is to be mediated to a defined audience with a defined learning objective. It can also be a description of a process during which the audience learns, instead of being a passive recipient of information.
    • Nugget is an object that is editable only by a dedicated author (or group) and is not expected to be updated once finalised. Nuggets do not have a predefined attribute structure.
    • Study describes a research study and its answers, i.e. observational or other data obtained in the study. The research questions are described as the question of the information object, and the study methods are described as the rationale of the object. Unlike in an article, an introduction or discussion may be missing, and unlike in a variable, the answer and rationale of the study are more or less fixed after the work is done; this is because the interpretation of the results typically happens elsewhere, e.g. in variables for which the study contains useful information.
  • Discussion is a hierarchically structured documentation of a discussion about a defined statement or statements.
  • Stakeholder page is used to describe a person or group that is relevant for a decision or decision process; it may be an actor with an active role in decision making or a target of impacts. Contributors of Opasnet are described on their own user pages; other stakeholders may have their page in the main namespace.
  • Process describes elements of a decision process.
  • Action describes who should act, what they should do, and when, e.g. to perform an assessment, make a decision, or implement policies.

Relations show different kinds of connections between items.

  • Causal link tells that the subject may change the object (e.g. affects, increases, decreases, prevents).
  • Participatory link describes a stakeholder's particular role related to the object (participates, negotiates, decides).
  • Operational link tells that the subject has some kind of practical relation to the object (executes, offers, tells).
  • Evaluative link tells that the subject shows preference or relevance about the object (has truthlikeness, value, popularity, finds important).
  • Referential link tells that the object is used as a reference of a kind for the subject (makes relevant; associates to; has reference, has tag, has category).
  • Argumentative link occurs between statements that defend or attack each other (attack, defend, comment).
  • Property link connects an evaluative (acceptability, usability), a logical (opposite, inverse) or set theory (has subclass, has part) property to the subject.

Item types

This ontology is specifically about decision making, and therefore actions (and decisions to act) are handled explicitly. However, any natural, social, ethical or other phenomena may relate to a decision and therefore the vocabulary has to be very generic.

Table S3-1. Item types used in open policy ontology.
Class English name Finnish name Description
resource resurssi All items and relations are resources
resource item asia Relevant pieces of information related to policy making. Sometimes also refers to the real-life things that the information is about. Items are shown as nodes in insight networks.
resource relation relaatio Information about how items are connected to each other. Relations are shown as edges in insight networks.
item substance ilmiö Items about a substantive topic or phenomenon itself: What issues relate to a decision? What causal connections exist between issues? What scientific knowledge exists about the issues? What actions can be chosen? What are the impacts of these actions? What are the objectives and how can they be reached? What values and preferences exist?
item stakeholder sidosryhmä Items about people or organisations who have a particular role in a policy process, either as actors or targets of impacts: Who participates in a policy process? Who should participate? Who has necessary skills for contributing? Who has the authority to decide? Who is affected by a decision?
item process prosessi Items about doing or happening in relation to a topic, especially information about how a decision will be made: What will be decided? When will it be decided? How is the decision prepared? What political realities and restrictions exist?
item action toiminta Items about organising decision support (impact assessment, decision making, implementation, and evaluation): What tasks are needed to collect and organise necessary information? How is information work organised? How and when are decisions implemented? Actions are also important afterwards to distribute merit and evaluate the process: Who did what? How did information evolve and by whom?
item information object tieto-olio A specified structure containing information about substance, stakeholders, processes, methods, or actions.
information object knowledge crystal tietokide information object with a standardised structure and contribution rules
knowledge crystal assessment arviointi Describes a decision situation and typically provides relevant information to decision makers before the decision is made (or sometimes after the decision about its implementation or success). It is mostly about the knowledge work, i.e. tasks for decision support.
knowledge crystal variable muuttuja Describes a real-world topic that is relevant for the decision situation. It is about the substance of the topic.
knowledge crystal method metodi Describes how information should be managed or analysed so that it answers the policy-relevant questions asked. How to perform information work? What methods are available for a task? How to participate in a decision process? How to use statistical and other methods and tools? How to motivate participation? How to measure merit of contributions?
information object discussion part keskustelun osa Information object that is used to organise discussions into a specified structure. The purpose of the structure is to help validation of statements and facilitate machine learning.
information object discussion keskustelu Discussion, or structured argumentation, describes arguments about a particular statement and a synthesis about an acceptable statement. In a way, discussion is (a documentation of) a process of analysing the validity of a statement.
discussion fact discussion faktakeskustelu Discussion that can be resolved based on scientific knowledge.
discussion value discussion arvokeskustelu Discussion that can be resolved based on ethical knowledge.
discussion part statement väite Proposition claiming that something is true or ethically good. A statement may be developed in a discussion by adding and organising related argumentation (according to pragma-dialectics), or by organising premises and inference rules (according to Perelman).
statement value statement arvoväite Proposition claiming that something is ethically good, better than something else, prioritised over something, or how things should be.
statement fact statement faktaväite Proposition claiming how things are or that something is true.
value statement true value statement tosi arvoväite A statement that has not been successfully invalidated.
value statement false value statement epätosi arvoväite A statement that has been successfully invalidated.
fact statement true fact statement tosi faktaväite
fact statement false fact statement epätosi faktaväite
statement true statement tosi väite
statement false statement epätosi väite
statement opening statement avausväite A statement that is the basis for a structured discussion, a priori statement.
statement closing statement lopetusväite A statement that is the resolution of a structured discussion, a posteriori statement. Closing statement becomes an opening statement when the discussion is opened again.
opening statement fact opening statement avausfaktaväite
closing statement fact closing statement lopetusfaktaväite
opening statement value opening statement avausarvoväite
closing statement value closing statement lopetusarvoväite
discussion part argument argumentti A statement that also contains a relation to its target as an integral part. Due to this relation, arguments appear inside discussions and target, directly or indirectly, the opening statement.
discussion part argumentation väittely Hierarchical list of arguments related to a particular statement.
information object knowledge crystal part tietokideosa This is shown separately to illustrate that the objects are actually linked by has part rather than has subclass relation.
knowledge crystal part question kysymys A research question asked in a knowledge crystal. The purpose of a knowledge crystal is to answer the question.
knowledge crystal part answer vastaus An answer or set of answers to the question of a knowledge crystal, based on any relevant information and inference rules.
knowledge crystal part rationale perustelut Any data, discussions, calculations or other information needed to convince a critical rational reader that the answer of a knowledge crystal is good.
knowledge crystal part answer part vastausosa This is shown separately to illustrate that the objects are actually linked by has part rather than has subclass relation.
answer part result tulos The actual, often numerical result to the question, conditional on relevant indices.
answer part index indeksi A list of possible values for a descriptor. Typically used in describing the result of an ovariable.
answer part conclusion päätelmä In an assessment, a textual interpretation of the result. Typically a conclusion is about what decision options should or should not be rejected and why based on the result.
knowledge crystal part ovariable ovariable A practical implementation of a knowledge crystal in modelling code. Ovariable takes in relevant information about data and dependencies and calculates the result. Typically implemented in R using OpasnetUtils package and ovariable object type.
ovariable key ovariable avainovariable An ovariable that is shown on an insight network even if some parts are hidden due to practical reasons.
information object publication julkaisu Any published report, book, web page or similar permanent piece of information that can be unambiguously referenced.
publication nugget tiedomuru An object that is not editable by other people than a dedicated author (group).
substance topic aihe A description of an area of interest. It defines the boundaries of the content rather than the content itself, which is defined by statements. When the information structure is improved, a topic often develops into a question of a knowledge crystal, while a statement develops into an answer of a variable.
priority objective tavoite A desired outcome of a decision. In shared understanding description, it is a topic (or variable) that has value statements attached to it.
substance risk factor riskitekijä
substance indicator indikaattori Piece of information that describes a particular substantive item in a practical and often standard way.
indicator risk indicator riski-indikaattori Indicator about (health) risk or outcome
information object data tietoaineisto
information object graph kuvaaja Graphical representation of a piece of information. Typically related to an information object with the describes relation.
work data work tietotyö
work data use tiedon käyttö
substance priority prioriteetti
substance expense kustannus
substance health impact terveysvaikutus
stakeholder decision maker päättäjä
stakeholder public officer virkamies
stakeholder assessor arvioija
stakeholder expert asiantuntija
stakeholder citizen kansalainen
stakeholder agent toimija
action task toimenpide action to be taken when the option has been selected
action decision päätös action to be taken when the option is yet to be selected. Describes a particular event where a decision maker chooses among defined alternatives. This may also be a part of an assessment under heading Decisions and scenarios.
action work työ continuous actions of the same kind and typically independent of the decision at hand. If the decision changes work routines, the action to make this change happen is called task.
work prevention ennaltaehkäisy trying to prevent something
work treatment hoito trying to fix something when something has already happened
work support tuki work that aids in the completion of the selected option, in whatever way
method open policy practice avoin päätöksentekokäytäntö framework for planning, making, and implementing decisions
method open assessment avoin arviointi method answering this question: How can factual and value information be organised for supporting societal decision making when open participation is allowed?
method analysis analyysi
method reporting raportointi
method measurement mittaus
publication study tutkimus
publication encyclopedia article ensyklopedia-artikkeli An object that describes a topic rather than answers a specific research question.
publication lecture luento Contains a piece of information that is to be mediated to a defined audience and with a defined learning objective.
method procedure toimintamalli
method principle periaate a short generic guidance for information work to ensure that the work is done properly. They especially apply to the execution phase.
principle intentionality tavoitteellisuus See Table 3 for explanations.
principle causality syysuhteiden kuvaus
principle criticism kritiikki
principle permanent resource locations kohteellisuus
principle openness avoimuus
principle reuse uusiokäyttö
principle use of knowledge crystals tietokiteiden käyttö
principle grouping ryhmäytyminen Facilitation methods are used to promote the participants' feeling of being an important member of a group that has a meaningful purpose.
principle respect arvostus Contributions are systematically documented and their merit evaluated so that each participant receives the respect they deserve based on their contributions.
objective expense objective kustannustavoite
process step jakso one of sequential time intervals when a particular kind of work is done in decision support. In the next step, the nature of the work changes.
step impact assessment vaikutusarviointi the first step in a decision process. Helps in collecting necessary information for making a decision.
step decision making päätöksenteko the second step in a decision process. When the decision maker actually chooses between options.
step implementation toimeenpano the third step in a decision process. When the chosen option is put in action.
step evaluation evaluointi the fourth step in a decision process. When the outcomes of the implementation are evaluated.
process phase vaihe one part of a decision work process where focus is on particular issues or methods. Typically phases overlap temporally.
phase shared understanding jaettu ymmärrys documenting of all relevant views, facts, values, and opinions about a decision situation in such a way that agreements and disagreements can be understood
phase execution toteutus production of necessary information for a decision at hand
phase evaluation and management seuranta ja ohjaus ensuring that all work related to a decision will be, is, and has been done properly
phase co-creation yhteiskehittäminen helping people to participate, contribute, and become motivated about the decision work

Relation types

Relations are edges between items (or nodes). A relation I is said to be an inverse of relation R if and only if, for all items subject and object, the claim "subject R object" is equivalent to the claim "object I subject".
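As an example of how the inverse relations in Table S3-2 below can be used, a hedged sketch in plain R (not part of OpasnetUtils) that derives the inverse edges of a triple table during preprocessing; the edge contents are illustrative only:

  # Derive inverse edges so that "subject R object" is also available as
  # "object I subject". Relation names follow Table S3-2; rows are illustrative.
  inverse_of <- c("increases" = "is increased by",
                  "decreases" = "is decreased by")
  edges <- data.frame(
    subject  = c("Wood burning", "Emission limits"),
    relation = c("increases",    "decreases"),
    object   = c("PM2.5 exposure", "PM2.5 exposure"),
    stringsAsFactors = FALSE
  )
  inverse_edges <- data.frame(
    subject  = edges$object,
    relation = unname(inverse_of[edges$relation]),
    object   = edges$subject,
    stringsAsFactors = FALSE
  )
  rbind(edges, inverse_edges)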

Table S3-2. Relation types used in open policy ontology.
Class English name Finnish name English inverse Finnish inverse Description
relation participatory link osallisuuslinkki The subject is a stakeholder that has a particular role related to an object
relation operational link toimintolinkki The subject has some kind of practical relation to the object (a fairly wide class)
relation evaluative link arvostuslinkki The subject shows preference or relevance about the object
relation referential link viitelinkki The subject is used as a reference of a kind for the object
relation argumentative link argumentaatiolinkki The subject is used as an argument to criticise the object.
relation causal link syylinkki The subject has causal effect on the object (or vice versa in the case of an inverse relation)
relation property link ominaisuuslinkki The object describes a defined property of the subject.
causal link negative causal link negatiivinen syylinkki The subject reduces or diminishes the object.
causal link positive causal link positiivinen syylinkki The subject increases or enhances the object.
negative causal link decreases vähentää is decreased by vähentyy
positive causal link increases lisää is increased by lisääntyy
negative causal link worsens huonontaa is worsened by huonontuu
positive causal link improves parantaa is improved by parantuu
negative causal link prevents estää is prevented by estyy
positive causal link enhances edistää is enhanced by edistyy
negative causal link impairs heikentää is impaired by heikentyy
positive causal link sustains ylläpitää is sustained by ylläpitäytyy
causal link affects vaikuttaa is affected by vaikuttuu
causal link indirectly affects vaikuttaa epäsuorasti indirectly affected by vaikuttuu epäsuorasti
causal link cause of syy caused by johtuu Wikidata property P1542
causal link immediate cause of välitön syy immediately caused by johtuu välittömästi Wikidata property P1536
causal link contributing factor of vaikuttava tekijä Wikidata property P1537
participatory link performs toteuttaa performer toteuttajana who does a task?
participatory link decides päättää decider päättäjänä
participatory link asks kysyy asker kysyjänä
participatory link participates osallistuu participant osallistujana
participatory link accepts hyväksyy accepted by hyväksyjänä
participatory link develops kehittää developed by kehittäjänä
participatory link proposes ehdottaa proposed by ehdottajana
participatory link answers vastaa answered by vastaajana
participatory link responsible for vastuussa responsibility of vastuullisena
participatory link negotiates neuvottelee negotiated by neuvottelijana
participatory link recommends suosittelee recommended by suosittelijana
participatory link controls kontrolloi controlled by kontrolloijana
participatory link claims väittää claimed by väittäjänä
participatory link owns omistaa owned by omistajana
participatory link does tekee done by tekijänä
participatory link maintains ylläpitää maintained by ylläpitäjänä
participatory link oversees valvoo overseen by valvojana
operational link has option omistaa vaihtoehdon option for vaihtoehtona
operational link has index omistaa indeksin index for indeksinä
operational link tells kertoo told by kertojana
operational link describes kuvaa described by kuvaajana
operational link maps kartoittaa mapped by kartoittajana
operational link contains data sisältää dataa data contained in data sisältyy
operational link data for on datana gets data from saa datansa
operational link uses käyttää is used by on käytettävänä an input (object) for a process (subject)
operational link produces tuottaa is produced by tuottajana Object is an output of a process produced by a stakeholder (subject)
operational link provides varustaa is provided by varustajana
operational link about aiheesta a task is about a topic. This overlaps with has topic; merge them?
property link logical link looginen linkki Relations based on logic
property link set theory link joukko-oppilinkki Relations based on set theory
set theory link part of osana has part sisältää osan is a part of a bigger entity, e.g. Venus is part of the Solar System. Wikidata property P361 (part of) & P527 (has part). Previously there were relations about a decision: substance of, decision process of, stakeholder of, method of, task of, irrelevant to. But these are deprecated and replaced by has part, because the class of the object makes specific relations redundant.
set theory link context for kontekstina has context omistaa kontekstin
set theory link has subclass omistaa alajoukon subclass of alajoukkona Wikidata property P279
set theory link has instance omistaa instanssin instance of instanssina Object belongs to a set defined by the subject and inherits the properties of the set. Synonym for has item, which is deprecated. Wikidata property P31
logical link opposite vastakohta subject is opposite of object, e.g. black is opposite of white. Wikidata property P461; it is its own inverse
logical link inverse toisinpäin a sentence is equal to another sentence where the subject and object switch places and the relation is replaced by its inverse. This is typically needed in the preprocessing of insight networks, and it is rarely shown explicitly in graphs. Wikidata property P1696; it is its own inverse
logical link if - then jos - niin if not - then not jos ei - niin ei If subject is true, then object is true. Also the negation is possible: if - then not. This links to logical operators and, or, not, equal, exists, for all; but it is not clear how they should be used in an insight network.
operational link prepares valmistelee prepared by valmistelijana
operational link pays kustantaa paid by kustantajana
operational link rationale for perustelee has rationale perusteltuu
operational link offers tarjoaa offered by tarjoajana
operational link executes suorittaa executed by suorittajana
operational link irrelevant to epärelevantti asiassa If there is no identified relation (or chain of relations) between a subject and an object, it implies that the subject is irrelevant to the object. However, sometimes people may (falsely) think that it is relevant, and this relation is used to explicate the irrelevance.
evaluative link finds important kokee tärkeäksi is found important tärkeäksi kokijana
evaluative link makes relevant tekee relevantiksi is made relevant relevantiksi tekijänä if the subject is valid in the given context, then the object is relevant. This typically goes between arguments, from a variable to a value statement, or from a value statement to a fact statement. This is a synonym of 'valid defense of type relevance'.
evaluative link makes irrelevant tekee epärelevantiksi is made irrelevant epärelevantiksi tekijänä Opposite of 'makes relevant'. Synonym of 'valid attack of type relevance'.
evaluative link makes redundant tekee turhaksi is made redundant turhaksi tekijänä Everything that is said in the object is already said in the subject. This depreciates the object because it brings no added value. However, it is kept for archival reasons and to demonstrate that the statement was heard.
evaluative link has opinion on mieltä Subject (typically a stakeholder) supports the object (typically a value or fact statement). This is preferred over 'values' and 'finds important' because it is more generic without loss of meaning.
evaluative link values arvostaa valued by arvostajana A stakeholder (subject) gives value to or finds an object important. Object may be a topic or statement. Deprecated; use 'has opinion' instead.
evaluative link has truthlikeness on totuudellinen A subjective probability that subject is true. Object is a numeric value between 0 and 1. Typically this has a qualifier 'according to X' where X is the person or archetype who has assigned the probability.
evaluative link has preference mieltymys preference of mieltymyksenä Subject is better than object in a moral sense.
evaluative link has popularity on suosiossa A measure based on likes given by users.
evaluative link has objective omaa tavoitteen objective of tavoitteena
argumentative link agrees samaa mieltä
argumentative link disagrees eri mieltä
argumentative link comments kommentoi commented by kommentoijana
argumentative link defends puolustaa defended by puolustajana
argumentative link attacks hyökkää attacked by hyökkääjänä
argumentative link relevant argument relevantti argumentti Argument is relevant in its context.
argumentative link irrelevant argument epärelevantti argumentti Argument is irrelevant in its context.
argumentative link joke about vitsi aiheesta provokes joke kirvoittaa vitsin This relation is used to describe that the subject should not be taken as information, even though it may be relevant. Jokes are allowed because they may help in creating new ideas and perspectives to an issue.
referential link topic of aiheena has topic aiheesta This is used when the object is a publication and the subject is a (broad) topic rather than a statement. In such situations, it is not meaningful to back up the subject with references. Useful in describing the contents of a publication, or identifying relevant literature for a topic.
referential link discussed in kerrotaan discusses kertoo
referential link reference for viitteenä has reference viite Subject is a reference that backs up statements presented in the object. Used in the same way as references in scientific literature are used.
referential link states väittää stated in väitetään kohteessa Describes the source of a statement; may also refer to a person.
referential link tag for täginä has tag omistaa tägin Subject is a keyword, type, or class for object. Used in classifications.
referential link category for kategoriana has category kuuluu kategoriaan
referential link associates with liittyy Subject is associated with object in some undefined way. This is a weak relation and does not affect the outcomes of inferences, but it may be useful to remind users that an association exists and it should be clarified more precisely. This is its own inverse.
referential link answers question vastaa kysymykseen has answer vastaus Used between a statement (answer) and a topic (question). In knowledge crystals, the relation is embedded in the object structure.
irrelevant argument irrelevant comment epärelevantti kommentti Inverses are not needed, because the relation is always tied with an argument (the subject).
irrelevant argument irrelevant attack epärelevantti hyökkäys
irrelevant argument irrelevant defense epärelevantti puolustus
relevant argument relevant comment relevantti kommentti
relevant argument relevant attack relevantti hyökkäys
relevant argument relevant defense relevantti puolustus
property link evaluative property arviointiominaisuus characteristic of a product or work that tells whether it is fit for its purpose. Especially used for assessments and assessment work.
evaluative property property of decision support päätöstuen ominaisuus What makes an assessment or decision support process fit for its purpose?
evaluative property setting of assessment arvioinnin kattavuus See Table 5.
setting of assessment impacts vaikutukset
setting of assessment causes syyt
setting of assessment problem owner asianomistaja
setting of assessment target users kohderyhmä
setting of assessment interaction vuorovaikutus
interaction dimension of openness avoimuuden ulottuvuus See Table 6.
dimension of openness scope of participation osallistumisen avoimuus
dimension of openness access to information tiedon avoimuus
dimension of openness timing of openness osallistumisen ajoitus
dimension of openness scope of contribution osallistumisen kattavuus
dimension of openness impact of contribution osallistumisen vaikutus
interaction category of interaction vuorovaikutuksen luokka See Table 2. How does assessment interact with the intended use of its results? Possible values: isolated (eristetty), informing (tiedottava), participatory (osallistava), joint (yhteistyöhakuinen), shared (jaettu).
property of decision support quality of content sisällön laatu See Table 4.
quality of content informativeness tarkkuus
quality of content calibration harhattomuus
quality of content coherence sisäinen yhdenmukaisuus
property of decision support applicability sovellettavuus
applicability relevance merkityksellisyys
applicability availability saatavuus
applicability usability käytettävyys
applicability acceptability hyväksyttävyys
property of decision support efficiency tehokkuus
efficiency intra-assessment efficiency sisäinen tehokkuus
efficiency inter-assessment efficiency ulkoinen tehokkuus

Appendix S4: Workspace tools: OpasnetUtils package and Opasnet Base

Ovariable

Ovariable is an object class used in R to operationalise knowledge crystals. In essence, impact assessment models are built using ovariables as the main tool for organising, analysing, and synthesising data and the causal relations between items. The purpose of ovariables is to offer a standardised, generalised, and modular approach to modelling. Standardised means that all ovariables have the same overall structure, which makes it possible to develop generalised functions and processes to manipulate them. A modular model structure makes it possible to change pieces within the model without breaking the overall structure or functionality. For example, it is possible to take an existing health impact model, replace the ovariable that estimates the exposure of the target population with a new one, and produce results that are otherwise comparable to the previous results but differ based on exposure.
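A hedged sketch of such a module swap, assuming the Ovariable() constructor and the EvalOutput() evaluation function of OpasnetUtils behave as described below in this appendix; the variable names, data, and slope factor are illustrative only:

  library(OpasnetUtils)

  # Illustrative exposure module: an ovariable defined directly from data.
  exposure <- Ovariable("exposure",
    data = data.frame(Area = c("Urban", "Rural"), Result = c(1.4, 0.6))  # e.g. ug/day
  )

  # Downstream health impact module: refers to its parent only by name.
  healthImpact <- Ovariable("healthImpact",
    dependencies = data.frame(Name = "exposure"),
    formula = function(...) {
      exposure * 0.05  # illustrative exposure-response slope
    }
  )

  # Swapping the module: redefine 'exposure' with an alternative estimate.
  # Nothing in healthImpact needs to change; re-evaluation uses the new parent.
  exposure <- Ovariable("exposure",
    data = data.frame(Area = c("Urban", "Rural"), Result = c(0.9, 0.4))
  )
  healthImpact <- EvalOutput(healthImpact)
  healthImpact@output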

What is the structure of an ovariable such that

  • it complies with the requirements of a variable and
  • it is able to implement probabilistic descriptions of multidimensional variables and
  • it is able to implement different scenarios?

An ovariable contains the current best answer, in a machine-readable format (including uncertainties when relevant), to the question asked by the respective knowledge crystal. In addition, it contains the information needed to derive that answer. The respective knowledge crystal typically has its own page at Opasnet, and the code to produce the ovariable is located on that page under the subheading Calculations.

It is useful to clarify terms here. Answer is the overall answer to the question asked (including an evaluated ovariable), and it is the reason for producing the knowledge crystal page in the first place. The answer is typically located near the top of the page to emphasise its importance. An answer may contain text, tables, or graphs on the web page. It typically also contains R code for evaluating the respective ovariable. Output is the key part (technically a slot) of the answer within an ovariable and contains the details of what the reader wants to know about the answer. All other parts of the ovariable are needed to produce the output or to understand its meaning. Finally, Result is the key column of the Output table (technically a data frame) and contains the actual numerical values of the answer.
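As a concrete illustration, a minimal sketch with illustrative data, assuming the result() accessor of OpasnetUtils extracts the result column as described:

  library(OpasnetUtils)
  X <- Ovariable("X", data = data.frame(Sex = c("Male", "Female"), Result = c(1.2, 0.9)))
  X <- EvalOutput(X)   # evaluation fills the @output slot with the answer
  X@output             # Output: a data frame with index columns and an XResult column
  result(X)            # Result: the numeric values of the result column only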

Slots

An ovariable is an S4 class object defined by the OpasnetUtils package in R. An ovariable has the following slots, which can be accessed using X@slot (where X is the name of the ovariable):

@name
  • Name of <self> (the ovariable object) is useful because R's S4 classes do not support self-reference. It is used to identify relevant data structures as well as to set up hooks for modifiers such as scenario adjustments.
@output
  • The current best answer to the question asked.
  • A single data frame (a 2D table type in R)
  • Not defined until <self> is evaluated.
  • Possible types of columns:
    • Result is the column that contains the actual values of the answer to the question of the respective knowledge crystal. There is always a result column, but its name may vary; typically it is of the form <self>Result.
    • Indices are columns that define or restrict the Result in some way. For example, the Result can be given separately for males and females, and this is expressed by an index column Sex, which contains locations Male and Female. So, the Result contains (at least) one row for males and one for females. If there are several indices, the number of rows is typically the product of numbers of locations in each index. Consequently, the output may become very large with several indices.
    • Iter is a special kind of index used in Monte Carlo simulations. Iter contains the number of the iteration. In Monte Carlo, the model is typically run 1000 or 10000 times.
    • Unit contains the unit of the Result. It may be the same for all rows, but it may also vary from one row to another. Unit is not an index.
    • Other, non-index columns can exist. Typically, they contain information that was used for some purpose during the evolution of the ovariable, but they may be unimportant in the current ovariable if they have been inherited from parent ovariables. Because of these other columns, the output may sometimes be rather wide.
@data
  • A single data frame that defines <self> as such.
  • The data slot contains data about direct measurements or estimates of the output. Typically, when data is used, the output can be derived directly from the information given, possibly with some manipulations such as dropping unnecessary rows or interpreting given ranges or textual expressions as probability distributions.
  • Probability distributions are interpreted by OpasnetUtils/Interpret.
@marginal
  • A logical vector that indicates which columns of the output are full marginal indices (as opposed to parts of joint distributions, result columns, units, or other row-specific descriptions).
@formula
  • A function that defines <self> using objects from dependencies as inputs.
  • Returns either a data frame or an ovariable, which is then used as the output of the ovariable.
  • Formula and dependencies slots are always used together. They estimate the answer indirectly in cases when there is knowledge about how this variable depends on the results of other variables (called parents). The slot dependencies is a table of parent variables and their identifiers, and formula is a function that takes the outputs of those parents, applies the defined code to them, and in this way produces the output for this variable.
@dependencies
  • A data frame that lists the variables required for evaluating <self> (its causal parents), together with tokens or identifiers of the model runs in which they were stored. The following columns may be used:
    • Name: name of an ovariable or a constant found in the global environment (.GlobalEnv).
    • Key: the run key (typically a 16-character alphanumeric string) of a model run stored on the Opasnet server. The key is used in the objects.get() function to fetch the dependent object.
    • Ident: page identifier and R code name to be used in the objects.latest() function, which fetches the dependent object from the newest stored run. Syntax: "Op_en6007/answer".
    • Other columns (e.g. Description) are also allowed and may contain additional information about the parents.
  • The dependencies slot enables references between ovariables via the function OpasnetUtils/ComputeDependencies, which creates the parent variables in the .GlobalEnv environment so that they are available to the expressions in formula.
  • Dependent ovariables are fetched and evaluated (only once by default) upon <self> evaluation.
@ddata
  • A string containing an Opasnet identifier e.g. "Op_en1000". May also contain a subset specification e.g. "Op_en1000/dataset".
  • This identifier is used to download data from the Opasnet database for the data slot (by default, only if empty) upon <self> evaluation.
  • By default, the data defined by ddata is downloaded when an ovariable is created. However, it is also possible to create and save an ovariable in such a way that the data is downloaded only when the ovariable is evaluated.
@meta
  • A list of descriptive information about the object. Typical information includes the creation date, the username of the creator, the identifier of the Opasnet page containing the ovariable code, and the identifier of the model run in which the object was created.
  • Other meta information can be added manually.
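
A minimal sketch of how these slots fit together, assuming OpasnetUtils has been loaded as in the example below; the ovariable name, index, and values are hypothetical, and the column names follow the conventions described above:

exposure <- Ovariable("exposure", data = data.frame(Sex = c("Male", "Female"), Result = c(1.2, 0.8))) # Create an ovariable whose data slot directly defines the answer, split by the index Sex.

exposure@name # "exposure": the name slot set by the constructor.

exposure@data # The data frame given above.

exposure@output # Empty until the ovariable is evaluated.

exposure <- EvalOutput(exposure) # Evaluate: the output slot is derived from the data slot.

exposure@output # A data frame with the index column Sex and the result column exposureResult.

exposure@marginal # Logical vector indicating which output columns are marginal indices (here, Sex).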

OpasnetUtils and operations with ovariables

OpasnetUtils is an R package available in the CRAN repository (cran.r-project.org). It contains tools for open assessment and modelling at Opasnet, especially for utilising ovariables as modelled representations of knowledge crystals. Typically, ovariables are defined on Opasnet pages, and their data and evaluated outputs are stored on the Opasnet server. There are also special user interface tools that enable user inputs before R code is run on an Opasnet page; for further instructions, see http://en.opasnet.org/w/R-tools. However, ovariables can also be used independently for building modular assessment models without any connection to Opasnet.

The example code below shows some of the most important functionalities. Each operation is followed by an explanatory comment after the # character.

install.packages("OpasnetUtils") # Install the package OpasnetUtils. This is done only once per computer.

library(OpasnetUtils) # Open the package. This is done once per R session.

objects.latest("Op_en4004", code_name="conc_mehg") # Fetch ovariables stored by code conc_mehg at Opasnet page Mercury concentrations in fish in Finland (with identifier 4004)

conc_mehg <- EvalOutput(conc_mehg) # Evaluate the output of ovariable conc_mehg (methyl mercury concentrations in fish) that was just fetched.

dat <- opbase.data("Op_en4004", subset="Kerty database") # Download data from the Kerty database subset on the same page and put it into the data frame dat

a <- Ovariable("a", data=data.frame(Fish=c("Herring","Salmon"), Result=c(1,3))) # Define ovariable for scaling salmon results with factor 3.

mehg_scaled <- conc_mehg * a # Multiply methyl mercury concentrations by the scaling factor.
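
The resulting output can be inspected like any data frame; this is a sketch, and the exact column names depend on the source data:

head(mehg_scaled@output) # First rows of the merged output: the Fish index, possible other indices, and the result column.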

An ovariable is well defined when there is enough data, code, or links to evaluate the output. Ovariables often have upstream dependencies whose outputs affect the output of the ovariable at hand. Therefore, ovariables are usually stored in a well-defined but unevaluated format (i.e. without output). This makes it possible to use the same ovariable in different contexts, and the output varies depending on the upstream dependencies. On the other hand, it is possible to store all evaluated ovariables of a whole assessment model. This makes it possible to archive all details of a certain model version for future scrutiny.

Ovariables have efficient index handling, which makes it possible to perform arithmetic operations such as sums and products with ovariables in a very simple way. The basic idea is that if the outputs of two ovariables have columns with the same name, they are automatically merged (or joined, using SQL vocabulary) so that rows are merged if and only if they have the same location values in those columns. The same principle applies to all pairs of columns with the same name. After the merge, the arithmetic operation is performed, row by row, on the Result columns of the two ovariables. This results in intuitive handling of outputs using short and straightforward code.
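
A minimal sketch of this automatic merging, assuming OpasnetUtils is loaded; the ovariable names, indices, units, and values are hypothetical:

intake <- Ovariable("intake", data = data.frame(Fish = c("Herring", "Salmon"), Result = c(30, 10))) # Fish intake (g/day) per fish species.

conc <- Ovariable("conc", data = data.frame(Fish = c("Herring", "Salmon"), Result = c(0.05, 0.10))) # Contaminant concentration (mg/g) per fish species.

dose <- EvalOutput(intake) * EvalOutput(conc) # Rows are merged on the shared index Fish, and the result columns are multiplied row by row.

dose@output # Two rows (Herring, Salmon) with the product (mg/day) in the result column.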

Recursion is another important property of ovariables. When an ovariable is evaluated, the code checks whether it has upstream dependencies. If it does, those ovariables are fetched and evaluated first, and their own dependencies are fetched recursively, until all dependencies have been evaluated. Case-specific adjustments can be made to this recursion by fetching some upstream ovariables before the first ovariable is evaluated: if an upstream ovariable already exists in the global environment, the existing object is used and the respective stored object is not fetched (dependencies are only fetched if they do not already exist, which also avoids unnecessary computation).
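
A sketch of such a case-specific adjustment: an upstream ovariable is placed in the global environment before the downstream ovariable is evaluated, so the stored version is not fetched during the recursion. The page identifier, code names, and values are hypothetical:

objects.latest("Op_en0000", code_name="exposure") # Fetch the (hypothetical) downstream ovariable exposure in its unevaluated form.

intake <- Ovariable("intake", data = data.frame(Result = 25)) # Locally redefine the upstream ovariable intake instead of using the stored version.

exposure <- EvalOutput(exposure) # During the recursion, the local intake found in .GlobalEnv is used and the stored one is not fetched.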

Decisions and other upstream commands

The general idea of ovariables is that their code should not be modified to match a specific model but should rather define the knowledge crystal in question as extensively as possible within its scope. In other words, an ovariable should answer its question in a reusable way so that the question and answer are useful in many different situations. (Of course, this should be kept in mind already when the question is defined.) To match the scope of specific models, ovariables can be modified without changing the ovariable code by supplying commands upstream. A typical decision command creates a new decision index with two scenarios, "business as usual" and "policy": the original ovariable result is used for business as usual, and the result for the policy scenario is adjusted, e.g. by adding to it or multiplying it by a constant reflecting the impact of the policy on the ovariable. Such adjustments can be made at the assessment level without any need to change the ovariable definition.
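
One simple way to implement the business-as-usual/policy adjustment described above is plain ovariable arithmetic, without any dedicated decision functions; in this sketch the ovariable names, the scenario labels, and the 20 % reduction are hypothetical:

emission <- Ovariable("emission", data = data.frame(Result = 100)) # A hypothetical upstream ovariable, e.g. annual emissions (t/a).

policy_effect <- Ovariable("policy_effect", data = data.frame(Scenario = c("BAU", "Policy"), Result = c(1, 0.8))) # Multiplier: no change under business as usual, a 20 % reduction under the policy.

emission_adj <- EvalOutput(emission) * EvalOutput(policy_effect) # The output of emission gains the new index Scenario; the definition of emission itself is unchanged.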

Evaluating a latent ovariable first triggers the evaluation of its unevaluated parent ovariables (listed in dependencies), since their results are needed to evaluate the child. This chain of evaluation calls forms a recursion tree in which each upstream variable is evaluated exactly once (cyclical dependencies are not allowed). Decision commands about upstream variables are checked and applied upon their evaluation and then propagated downstream to the variable whose evaluation was originally requested. For example, decisions in decision analysis can be supplied this way:

  1. pick an endpoint ovariable
  2. make decision variables for any upstream ovariables (this means that you create new scenarios with particular deviations from the actual or business-as-usual answer of that ovariable)
  3. evaluate endpoint ovariable
  4. optimize between options defined in decisions.

Other commands include collapsing marginal columns by sums, means, or sampling to reduce data size, and passing inputs from the model level without redefining a whole ovariable, as sketched below.
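
A sketch of what collapsing a marginal means in practice, written here with base R on the output data frame rather than with the dedicated OpasnetUtils commands; the column names and values are illustrative:

out <- data.frame(Fish = rep(c("Herring", "Salmon"), each = 3), Iter = rep(1:3, times = 2), doseResult = c(1.1, 0.9, 1.0, 3.2, 2.8, 3.0)) # Hypothetical output of an evaluated ovariable with a Fish index and a Monte Carlo index Iter.

collapsed <- aggregate(out["doseResult"], by = out["Fish"], FUN = mean) # Collapse the Iter index by averaging over iterations, leaving one row per Fish.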

Opasnet Base

Opasnet Base is a storage database for all kinds of data needed in open assessments. It may contain parameter values for models, which are typically shown as small tables on knowledge crystal pages, from which they are automatically stored to the database. It may also contain large datasets, such as research or population datasets with thousands or even millions of rows, which are uploaded to the database using an importer interface. Each table has its own structure and may or may not share column names with other tables; however, if a table is used directly as the data slot of an ovariable, it must have a Result column.

Technically, Opasnet Base is a NoSQL database using MongoDB software. Metadata about the tables is stored in a MySQL database. This structure offers the speed, searchability, and structural flexibility that a large amount of non-standard data requires. The database also offers version control, as old versions of a data table are kept in the database when new data is uploaded.

The database also contains data about model runs that have been performed at Opasnet, if objects were stored during the run. This makes it possible to fetch objects produced by a particular code on a particular knowledge crystal page. Typically the newest version is fetched, but information about old versions is kept as well. The stored objects are not located in MongoDB but in server files that can be accessed with a key. It is also possible to save objects in a non-public way so that the key is not stored in the database and is only given to the person who ran the code. For disk storage reasons, Opasnet does not guarantee that stored objects will be kept permanently; therefore, it is good practice to store final assessment runs with all objects in another location for permanent archival.

There are several ways to access database content.
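
A sketch of programmatic access with OpasnetUtils; the page identifier, subset name, and run key are hypothetical:

dat <- opbase.data("Op_en0000", subset="Example dataset") # Download a data table from the Opasnet Base of a (hypothetical) page into the data frame dat.

objects.latest("Op_en0000", code_name="answer") # Fetch the newest stored objects produced by the code answer on that page.

objects.get("abc123def456gh78") # Fetch the objects of one particular model run using its (hypothetical) 16-character run key.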

For further instructions, see http://en.opasnet.org/w/Opasnet_Base_UI for user interface and http://en.opasnet.org/w/Table2Base for the wiki interface of small tables.

Appendix S5: Tools to help in shared understanding

Many software tools and platforms exist to support decision making. Some of them are listed here, with a focus on open source solutions when available. Many examples come from Finland, as we have practical experience with them. The list aims to cover different functionalities and show examples rather than give an exhaustive list of all possibilities; such lists can be found in Wikipedia, e.g. https://en.wikipedia.org/wiki/Comparison_of_project_management_software. All links were accessed 1 Feb 2020.

Table S5-1. Useful functionalities and software in open policy practice.
Rows are grouped by item; each bullet gives a functionality or process phase followed by tools or software.

Decision process
  • Information-based decision support: There is no single tool covering the whole decision process, and development work is needed. An interesting pilot software, Climate Watch, is being developed by the city of Helsinki for comprehensively managing and evaluating its ambitious climate action plan and its impacts.
  • Initiative: Several websites for launching, editing, and signing citizen initiatives at the municipal or national level: Kansalaisaloite (Citizen Initiative), Nuortenideat (Ideas of the Young), Kuntalaisaloite (Municipality Initiatives). Similar tools could also be used for initiatives launched by Members of Parliament or the Government.

Substance
  • Content management: Diary systems, file and content management systems. Many individual solutions, mostly proprietary. The VAHVA project by the Finnish Government will provide knowledge and tools for content management.
  • Research data and analyses: AVAA, IDA, Fairdata and other data management tools help in managing research data from the original study to archival. Avoin data (open data in Finland) is a platform for publishing open data. Findicator provides indicators from all sectors of society. Datahub for open data sharing. Tools for separate analysis tasks are numerous, e.g. QGIS for geographical data. Several research fields have their own research and article databases, such as arXiv.org (articles about physics, mathematics and other fields), and there are several biological databases.
  • Public discussion, argumentation, statements: Otakantaa, Facebook, Twitter, blogs, and other social media forums for discussion. Websites for fact checking: Factbar, Fullfact, the Need to Know project. Agoravoting is an open voting system. Lausuntopalvelu collects statements from the public and organisations on planned legislation and Government programmes in Finland. Swarm AI for collective intelligence.
  • News: News feeds (open source): CommaFeed, Tiny Tiny RSS. Semantic, automated information searches, e.g. Leiki.
  • Description and assessment of decision situations and relevant causal connections: Opasnet for performing open assessments and impact assessments. Knowledge crystals as integral parts of models and assessments. Simantics System Dynamics for semantic models. Jupyter notebooks for collaborative model development. Wikidata and Wikipedia as storages of structured data and information.
  • Laws and regulations: Semantic Finlex contains the whole Finnish legislation and e.g. the decisions of the Supreme Court in a semantic structure.

Methods
  • Preparation of documents, co-creation, real-time co-editing: Several co-editing tools, e.g. Hackpad, MS Office 365, Google Docs, Etherpad, Dropbox Paper, MediaWiki and Git. These tools enable the opening of the planning and writing phase of a decision. For example, the climate strategy of Helsinki was co-created online with Google Docs and Sheets in 2018.
  • Development and spreading of good practices: InnoVillage helps to develop practices faster, as everyone's guidance is available online and can be commented on.
  • Organising systems for information and discussions: The decentralised social networking protocol ActivityPub. Full Fact automated fact checking. Compendium. Vocabularies and semantic tools: Resource Description Framework (RDF), Finto (Finnish Thesaurus and Ontology Service), and the AIF-RDF ontology used with the conceptual graph user interface COGUI. These act as a basis for organising, condensing and spreading knowledge.
  • Information design, visualisations: Interactive and static visualisations from complex data: Shiny, DiagrammeR, Gapminder, Lucify, Plotly, Cytoscape.

Work
  • Work processes in decision making, research etc. (follow-up, documentation): The Ahjo decision repository and the Openahjo interface document and retrieve decisions made in the city of Helsinki. Git enables reporting of both research and decision processes. There are several new platforms for improving science, such as the Open Science Framework for facilitating open collaboration in research. Omidyar Network is a philanthropic investment firm supporting e.g. governance and citizen engagement.
  • Co-creation, experiments, crowdsourcing: Kokeilun paikka promotes experiments when applicable information is needed but not available. Sociocracy 3.0 provides learning material and principles for open collaboration in organisations of any size.
  • Project management: There is a lot of project management software, mainly targeted at enterprise use but somewhat applicable to decision making or research. Some examples: OpenProject, Project Management Body of Knowledge, Comparison of project management software, Fingertip.

Stakeholders
  • Expert services: ResearchGate, Solved, and other expert networks.

See also

Parts not used

  • Political extremism is supported by an illusion of understanding[1]
  • en:Political polarization
  • U.S. media polarization and the 2020 election: a nation divided [11].
  • WIRED: Psychological microtargeting could actually save politics [12]
  • PLOS blogs: Future of open series [13]

None of the websites and tools described in this article offers a complete environment for open, topic-wise scientific information production and discussion that would also support decision making. Opasnet works well for online assessments, but it is not optimised for documenting policy discussions or scientific work in real time. Climate Watch was designed to implement open policy practice in the specific context of municipal climate action plans, and there are plans to generalise its functionalities for a wider user base. This could be achieved by merging the functionalities of e.g. Opasnet, Open Science Framework, open data repositories, and discussion forums. Even if different tasks took place on separate websites, they could form an integrated system (by using e.g. standard interfaces and permanent resource locations) to be used by decision makers, experts, stakeholders, and machines. The Resource Description Framework and ontologies could be helpful in organising such a complex system.

A boundary object is a concept for managing information work within a heterogeneous group of participants[2]. As people come from different disciplines, they see things differently and use different words to describe them. Boundary objects are common words or concepts that are similar enough across disciplines to support understanding while allowing specific interpretations within disciplines or by individuals. Several dioxin-related knowledge crystals were successfully used as boundary objects in the BONUS GOHERR project (Table S1-1) to produce shared understanding among authorities, fishers, and researchers from public health, marine biology, and the social sciences.[3]

Shared understanding aims to bring different views together. This is needed especially during the current time of polarisation[4]. The open assessments performed have identified more agreement, even on heated topics, than social media discussions would suggest. The pneumococcus case is an example of this.

Presenting controversial and unpopular ideas is also a prerequisite for complete shared understanding. Thus, a community producing shared understanding should cherish and respect such activity and promote contributions even if they are incompatible with the scientific or other paradigms. This is challenging both for the person presenting such claims and for anyone personally opposed to the presented idea. It helps if all parties have faith in the process and its capability to produce fair conclusions[5]. Therefore, society should promote the acceptability of open decision processes, open participation, and diverse contributions. Such an attitude prevails in the climate strategy of Helsinki, but it was already present five years earlier in the Transport and communication strategy in digital Finland (Table S1-1).

Openness does not mean that all kinds of organisations or individuals are equally inclined or able to use assessment information. Such equity issues are considered a separate question and are not dealt with in this generic examination.

Openness is crucial because a priori it is impossible to know who may have important factual information or value judgements about the topic.

Open platforms for the deliberation of decisions are available (otakantaa.fi, kansalaisaloite.fi), and the sharing of code is routinely done via large platforms (ubuntu.com, cran.r-project.org). Generic online tools such as Google Drive (drive.google.com), Slack (slack.com), and others have also familiarised people with online collaboration and with the idea that information is accessible from anywhere.

ArXiv.org is a famous example of preprint servers offering a place for publishing and discussing manuscripts before peer review[6]. Such websites, as well as open access journals, have become more common in recent years as the importance of the availability of scientific information has been recognised. The use of open data storage (ida.fairdata.fi) for research results is often required by research funders.

Aumann's agreement theorem demonstrates that rational Bayesian agents with common knowledge of each other's beliefs cannot agree to disagree, because they necessarily end up updating their posteriors with those of the others[7]. In this thinking, shared understanding can be seen as an intermediate phase where the disagreements have been identified but the posteriors have not yet been updated to reflect the data possessed by the other person.

Acquiescence, i.e. situations where people know that their choice is irrational but they choose it anyway[8]

There is a new political movement (Liike Nyt, https://liikenyt.fi/) in Finland that claims that its members of parliament will vote according to whatever a public online discussion concludes. This approach is potentially close to co-created policy recommendations based on shared understanding. However, at least so far, they are not using novel information tools or concepts to synthesise public discussions; instead, they use social media groups and online polls. (Compare also the Five Star Movement?)

We also hypothesise that only a few major paradigms will emerge, namely those whose applicability is wide and independent of the discipline. The scientific paradigm is expected to be one of them, and it will be interesting to see what else emerges. People commonly reason against some unintuitive rules of the scientific method (e.g. they try to prove a hypothesis right rather than wrong), but it is not clear whether this will create a need to develop a paradigm for an alternative approach. It is not even clear whether people are willing to accept the idea that there could be different, competing rules for reasoning within a single assessment or decision process.

Indeed, only 7 % of people contributing to Wikipedia do it for professional reasons[9].

Omidyar Network is an organisation that gives grants to non-profit organisations and also invests in startups that promote e.g. governance and citizen engagement[10]. As an example, it supports tools to improve online discussion with annotations[11], an objective similar to that of structured discussions.

Additional references to pragma-dialectics[12].

Some experts and politicians seem to see criticism as a threat that should be pre-emptively avoided by only publishing finalised products. In contrast, agile processes publish their draft products as soon as possible and use criticism as a source of useful and relevant information.

Open Science Framework is a project that aims to increase reproducibility in science by developing structured protocols for reproducing research studies, documenting study designs and results online, and producing open source software and preprint services to support this[13]. The Framework maintains a web workspace for documenting research as it unfolds, rather than only afterwards in articles.

Our own experience is the same, and we have not seen hijacking, malevolent behaviour, or low-quality junk contributions. Some robots do post unrelated advertisement material on Opasnet pages, but it is easy to identify and remove, and it has not become a problem.

TO DO

  • A specific link should be available to mean the object itself rather than its description. http://en.opasnet.org/entity/...?
  • All terms and principles should be described at Opasnet at their own pages. Use italics to refer to these pages.
  • Upload Tuomisto 1999 thesis to Julkari. And Paakkila 1999

Suggested editors: Daniel Angus, Özlem Uzuner, Sergio Villamayor Tomás, Frédéric Mertens

  1. Fernbach PM, Rogers T, Fox CR, Sloman SA. (2013) Political Extremism Is Supported by an Illusion of Understanding. Psychological Science 24(6): 939-946. https://doi.org/10.1177/0956797612464058
  2. Star SL, Griesemer JR. (1989) Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39. Social Studies of Science 19: 387-420.
  3. GOHERR workshop paper (reference to be added).
  4. Pew Research Center. (2020) U.S. Media Polarization and the 2020 Election: A Nation Divided. https://www.journalism.org/2020/01/24/u-s-media-polarization-and-the-2020-election-a-nation-divided/
  5. Rodriguez-Sanchez C, Schuitema G, Claudy M, Sancho-Esper F. (2018) How trust and emotions influence policy acceptance: The case of the Irish water charges. British Journal of Social Psychology 57(3): 610-629. https://doi.org/10.1111/bjso.12242
  6. Cornell University Library. arXiv.org. https://arxiv.org/. Accessed 1 Feb 2020.
  7. Aumann RJ. (1976) Agreeing to Disagree. The Annals of Statistics 4(6): 1236-1239. doi:10.1214/aos/1176343654
  8. Walco DK, Risen JL. (2017) The Empirical Case for Acquiescing to Intuition. Psychological Science 28(12): 1807-1820. doi:10.1177/0956797617723377
  9. Pande M. (2011) Wikipedia editors do it for fun: First results of our 2011 editor survey. https://blog.wikimedia.org/2011/06/10/wikipedia-editors-do-it-for-fun-first-results-of-our-2011-editor-survey/. Accessed 1 Feb 2020.
  10. Omidyar Network. A world of positive returns. http://www.omidyar.com. Accessed 1 Feb 2020.
  11. Hypothesis. Annotate the web, with anyone, anywhere. https://web.hypothes.is/. Accessed 1 Feb 2020.
  12. Eemeren FH van. (2015) Reasonableness and effectiveness in argumentative discourse. Fifty contributions to the development of pragma-dialectics. Springer International Publishing Switzerland. ISBN 978-3-319-20954-8. doi:10.1007/978-3-319-20955-5
  13. Open Science Framework. https://osf.io/. Accessed 1 Feb 2020.