Reusability of information objects

From Opasnet
Revision as of 14:01, 28 January 2011


Re-usability of information objects is an idea about formalizing and unifying the structure of the descriptions produced in assessments so that the outputs of assessments become re-usable and applicable in other assessments, even outside the original context of their development. The idea has similarities with the principles of modular design applied in, for example, engineering and software development. It also builds on the principles of mass collaboration by encouraging assessors to share their products with others in order to gain the mutual benefits of collective learning.

Science is inherently based on learning from previous experience, so this is not a new concept. However, re-usability of information objects goes beyond that. It is based on intentionally producing generally applicable pieces of information that can be re-used later in similar or related knowledge work. In addition, re-usability of information objects builds on the principle of mass collaboration, which implies that the pieces of information produced are made commonly available for later use by oneself or others. What is new in this concept is that the information structures are designed for general use, and that there are specific procedures for extracting pieces of information from previous work and making them centrally available to others. This is done in order to enhance the efficiency and effectiveness of the work.

There are some common assessment practices for making use of the available outputs of previous assessments, but the idea of re-using previously produced information is by no means restricted to assessment. The general pattern described below could be expanded to cover nearly all types of knowledge-intensive work. The common practices can be categorized as:

  • Using general assessment templates
  • Using specific templates created for certain assessment types
  • Using old assessments as examples

The first two bullets are forms of generalization based on the common knowledge base on the issue, e.g. building on the full-chain approach to integrated environmental health impact assessment (IEHIA), covering the steps from sources to impacts. The specific templates would then be more detailed general frameworks for certain common types of assessments, e.g. a template for air pollution assessments or a template for food contaminant assessments. Both of these types primarily serve the purpose of giving rough guidelines for getting started and for taking into account the most important generally needed aspects of a particular assessment. The last bullet is of a more case-specific type: previously available assessments addressing similar issues, or parts of them, can be picked and used as examples for the particular assessment at hand.

All three practices of re-using previously produced information described above are somewhat indirect, implicit and subjective in nature. Pieces of information are extracted and adapted to match the current case, and the overall result of conducting assessments with these approaches is a collection of individual, separate assessments. This, of course, is flexible and does not require much formality, but it is not necessarily very efficient, because essentially the same work of describing roughly the same phenomena may be done over and over again in separate assessments. It also allows inconsistencies between different assessments, which may, for example, reduce the applicability of the assessments.

In addition to the three practices described above, there is a fourth, more direct but challenging, approach to making use of previous assessments that should be considered. In the context of assessment, re-usability of information objects means making direct use of the pieces of information produced in previous assessments. If something that is needed in a particular assessment has already been produced in a previous assessment and is available, it can, and in fact should, be used as such. The piece of information can be developed further, but it may only be changed so that it stays coherent with all the descriptions where it has been used previously or is used simultaneously.

In other words, in the practice of re-usability of information objects, chunks of information are handled as independent objects that are used and developed in assessments, but are not owned by any particular assessment or contributor. This leads to a situation where the assessments together form a wider overall description of reality than any particular assessment alone. This wider description also serves as a cumulating source of information providing the basis for collective learning. It could also be said that it is the overall description of reality that learns as the information from different individual assessments cumulates.

The rationale behind this kind of approach to making use of previous work is to improve both the efficiency and the effectiveness of assessments. It is easy to see how the required effort can be reduced by using existing pieces as parts of an assessment instead of doing everything from scratch (even if there were good examples to follow). Also, when the chunks of information are developed through use in several individual assessments, it is reasonable to expect the overall quality of the information to improve.

This is clearly a more universal approach than the common practices, since the chunks of information that are produced need to be coherent not only within a particular assessment, but also within the wider overall description, which could be called the space of all assessments. This is possible, at least in theory, if all the pieces of information are descriptions of parts of reality; incoherence actually means that some variables are not describing reality and should thus be changed. This can sometimes be hard work, but the outcome, a coherent description of reality, is worth the effort.

Let us call the basic building block of assessment a variable, and let us assume that everything can be described as variables. From the point of view of re-usability of information objects, individual variables and groups of variables can basically be handled in the same way, so what is said below about variables also covers groups of variables. (Note that the requirement of uniformity of assessment structure here refers to the description, i.e. the assessment product, not the process of making the assessment.) In discussions within Intarese, these general building blocks of assessment have also been referred to as, e.g., templates, objects, or general variables, but a common term has not yet been established.

It was suggested above that the key to efficient collective learning lies in the ability to make direct use of the information in previous assessments, i.e. to be able to re-use the products of previous work as such. Therefore, it is very useful, if not necessary, that the pieces of information, i.e. variables, are uniformly defined. This requires that the structure of a variable is fixed and generally applicable. In addition, the scoping of each variable, i.e. defining the boundaries of what the variable describes, should be done keeping in mind that the variable might also be used outside its original context. In principle, every variable could and should be defined and developed in this way.
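
As an illustration, the fixed, generally applicable variable structure called for above could be sketched as a simple record. The attribute names below (name, scope, definition, result) are illustrative assumptions, not an established specification:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Variable:
    """A uniformly structured, re-usable piece of information.

    The attributes are hypothetical examples of a fixed structure:
    every variable answers the same questions, regardless of the
    assessment it was created in.
    """
    name: str           # identifier within the space of all assessments
    scope: str          # boundaries of what the variable describes
    definition: str     # how the result is derived (data, causality)
    result: Any = None  # current best estimate of the variable's value

pm_concentration = Variable(
    name="Fine particulate matter concentration in air",
    scope="Ambient PM2.5 concentration; not restricted in space or time",
    definition="Measured or modelled from emission and dispersion data",
)
```

Because every variable shares the same structure, an assessment could adopt pm_concentration as such, or develop its result further without touching its scope.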

This requires also means to control the hierarchical relations between variables. A particular variable relevant in an assessment may be, and most often is, a locally relevant variation of a more general variable. An example could be e.g. the relation between a universal variable describing fine particulate matter concentration in air and a case-specific variable called average fine particulate matter concentration in Kuopio in March 2007. Basically the case-specific variable shares a big part of its important information with the universal variable, in other words inherits these characteristics from the universal variable, but the case-specific variable is spatio-temporally more narrowly defined and is limited to considering the average value. Making use of the outputs of previously made assessments may thus also take place in the form of either deducing locally relevant variables from more universal variables or inducing more universal variables from case-specific variables.
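
The deduction of a locally relevant variable from a universal one can be sketched as an inheritance operation: the child keeps every characteristic it does not explicitly override. This is a minimal illustration, assuming variables are represented as plain attribute dictionaries:

```python
import copy

def specialize(universal: dict, **overrides) -> dict:
    """Derive a case-specific variable from a universal one.

    The child inherits all characteristics of the universal
    variable and overrides only those it narrows down."""
    child = copy.deepcopy(universal)
    child.update(overrides)
    child["parent"] = universal["name"]  # keep the hierarchical link
    return child

universal = {
    "name": "Fine particulate matter concentration in air",
    "unit": "ug/m3",
    "scope": "anywhere, at any time",
}

local = specialize(
    universal,
    name="Average fine particulate matter concentration in Kuopio in March 2007",
    scope="Kuopio, March 2007; average value only",
)
```

Here the unit is inherited unchanged from the universal variable, while the scope is narrowed spatio-temporally, mirroring the Kuopio example above.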

In practice, a particular assessment may contain some very specific issues that are unlikely to be used elsewhere. Maintaining their coherence with the other assessments takes resources but is unlikely to produce much benefit. It might therefore be reasonable to distinguish between variables of primary and secondary importance: only the variables of primary importance need to be kept coherent within the space of all assessments, while the secondary variables need to be coherent only within the particular assessment where they are used. If someone challenges the coherence of a primary variable outside the assessment, the challenge must be addressed and, if necessary, the variable description altered to make it generally coherent. This would not be necessary in the case of a secondary variable.
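
This distinction could be operationalized as a simple rule deciding how widely a variable's coherence must be defended. The importance flag and the function name here are hypothetical:

```python
def coherence_scope(variable: dict, assessment: str) -> str:
    """Return the scope within which the variable must stay coherent.

    Primary variables must be coherent across the whole space of
    assessments; secondary ones only within their own assessment.
    """
    if variable.get("importance") == "primary":
        return "space of all assessments"
    return assessment

exposure = {"name": "PM2.5 exposure-response function", "importance": "primary"}
detail = {"name": "Local ventilation rates", "importance": "secondary"}
```

A challenge against the exposure-response function would then have to be answered against all assessments, while the ventilation detail is defended only locally.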

Like the application of the principles of mass collaboration in assessment, re-usability of information objects also requires technical facilitation to become feasibly applicable. The tools of assessment need to be designed and used in such a way that re-usability of information objects can take place. This requirement adds at least a few points to the critical features of the collaborative workspace where the assessment work is carried out in practice. These are:

  • Ability to store and present assessments
  • Ability to manipulate the information within assessments as individual variables
  • Ability to control the hierarchical relations between variables
    • Categorization of variables
    • Inheritance of characteristics from variables at higher levels of the hierarchy
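
The listed features can be sketched together as a minimal in-memory workspace that stores variables, categorizes them, and resolves inherited characteristics along the hierarchy. All names here are illustrative, not an actual Opasnet interface:

```python
class Workspace:
    """Minimal sketch of a collaborative workspace for variables."""

    def __init__(self):
        self.variables = {}  # name -> attribute dict (incl. category)
        self.parents = {}    # child name -> parent name

    def add(self, name, category, parent=None, **attrs):
        """Store a variable with its category and optional parent."""
        self.variables[name] = {"category": category, **attrs}
        if parent is not None:
            self.parents[name] = parent

    def resolve(self, name, attr):
        """Look up an attribute, inheriting from higher levels of
        the hierarchy when the variable itself does not define it."""
        current = name
        while current is not None:
            value = self.variables[current].get(attr)
            if value is not None:
                return value
            current = self.parents.get(current)
        return None

ws = Workspace()
ws.add("PM2.5 concentration in air", category="concentration", unit="ug/m3")
ws.add("Average PM2.5 concentration in Kuopio in March 2007",
       category="concentration",
       parent="PM2.5 concentration in air")
```

Resolving the unit of the Kuopio variable falls through to the universal parent, which is exactly the inheritance of characteristics that the last bullet calls for.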

References


See also

*[[:en:Modular design | Modular design]] (in Wikipedia)