Opasnet base structure

Revision as of 07:19, 1 October 2008 by Juha Villman (talk | contribs) (added few tables to result database)

The result database is a storage and retrieval system for variable results. It is basically an SQL database with the following functionalities:

  1. Storage of results of variables, with uncertainties when necessary, and as multidimensional arrays when necessary.
  2. Automatic retrieval of results when called from the collaborative workspace or other platforms.
  3. Description and handling of the dimensions that the variables may take.
  4. Storage and retrieval system for items that are needed to calculate the results of variables.
  5. A platform for performing computer runs to update the results of variables.
  6. Follow-up of the linkages between variables, the data about a particular variable, and the computing formula of the variable, with respect to their impact on the variable result.
  7. Follow-up of the age and validity of the content based on the previous point.
  8. A platform for planning computer runs based on the update need, CPU demand, and CPU availability.

Functionalities of the result database

Storage and retrieval of results of variables

The most important functionality is to store and retrieve the results of variables. Because variables may take very different forms (from a single value, such as a natural constant, to an uncertain spatio-temporal concentration field over the whole of Europe), the database must be very flexible. The basic solution is described on the variable page and is only briefly summarised here. The result is described as

  P(R|x1,x2,...) 

where P(R) is the probability distribution of the result and x1 and x2 are defining locations where a particular P(R) applies. A dimension means a property along which there are multiple locations and the result of the variable may have different values when the location changes. In this case, x1 and x2 are dimensions, and particular values of x1 and x2 are locations. A variable can have zero, one, or more dimensions. Even if a dimension is continuous, it is usually operationalised in practice as a list of discrete locations. Such a list is called an index, and each location is called a row of the index. In the general information structure of the new risk assessment method, dimensions are Classes with a special purpose. An index can be thought of as a variable that inherits its plausible range from a dimension (class).

Uncertainty about the true value of the variable is operationalised as a random sample from the probability distribution, in such a way that the samples are located along an index Sample, which is a list of integers 1, 2, 3, ..., n, where n is the number of samples.
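As an illustration of this representation, the sketch below (in Python, with illustrative names; not part of the actual system) stores a one-dimensional uncertain result as rows of (location, sample, value), with the Sample index running from 1 to n:

```python
import random

def draw_result(locations, n_samples, sampler):
    """Return rows of (location..., sample, value) for one variable."""
    rows = []
    for loc in locations:                  # one block of rows per index location
        for s in range(1, n_samples + 1):  # the Sample index: integers 1..n
            rows.append((*loc, s, sampler(loc)))
    return rows

random.seed(0)
# Two locations along a single "area" dimension, three samples each
rows = draw_result([("north",), ("south",)], 3,
                   lambda loc: random.gauss(10.0, 2.0))
```

Each row corresponds to one record of the Result table: the location identifies where P(R) applies, and the sample number places the realisation along the Sample index.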

The dimensions of a variable are determined by the parent variables (by inheritance) and the formula used to calculate the result. Thus, there is no place where the dimensions of a particular variable are explicitly specified. In addition, the indices (as operationalisations of dimensions) are NOT properties of variables but of risk assessments. This may sound unintuitive, but the reasoning is that indices are just practical ways to handle dimensions, and these practical needs may change from one assessment to another.

The tables Variable and Result contain the result data. In addition, Location, Dimension, Index, and Rows contain data about the dimensions and indices used. These tables together offer the functionalities of data storage and retrieval, and handling of multidimensionality and uncertainty.

Calculation of the updated results

The result of a variable can be calculated when four things are known:

  1. The list of upstream variables (parents) (Definition/causality attribute),
  2. The results of the parent variables,
  3. The data used to derive the result (Definition/data attribute), and
  4. The formula used to calculate the result based on the items above (Definition/formula attribute).

The three sub-attributes of the Definition are represented by three tables in the result database: Causality, Formula, and Data. In addition, the results of the parents can be obtained from the Result table. The variable transfer protocol is used to extract these data from the result database, send them to external software such as R to calculate the result, and store the calculated result into the Result table of the database. The technical solutions for doing this in practice still have to be developed.
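The calculation cycle described above can be sketched with in-memory stand-ins for the Causality, Formula, and Result tables; all names here are illustrative, not the actual Opasnet schema or transfer protocol:

```python
# Stand-ins for the database tables (illustrative data, not real variables)
results = {"emission": [10.0, 12.0], "dispersion": [0.5, 0.25]}  # parent samples
causality = {"concentration": ["emission", "dispersion"]}        # parent lists
formulas = {"concentration": lambda e, d: [ei * di for ei, di in zip(e, d)]}

def update_variable(name):
    """Recalculate one variable from the four items listed above."""
    parents = causality[name]                       # 1. the list of parents
    parent_results = [results[p] for p in parents]  # 2. their results
    formula = formulas[name]                        # 4. the formula
    results[name] = formula(*parent_results)        # compute and store
    return results[name]

update_variable("concentration")
```

The sample-wise multiplication mirrors how uncertain results propagate: the formula is applied to each sample of the parents in turn, yielding a new sample list for the child.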

When a variable result is calculated, the computing software must know which indices to use with which variables. This can be resolved automatically using the following reasoning algorithm.

  • Make a list of all unfinished risk assessments.
  • Make a list of all indices in these risk assessments.
  • Compile all indices of a particular dimension into one large "super-index" with all the locations.
  • Use these "super-indices" in the calculations.
  • Apply a particular "super-index" for a particular variable, if that variable has the dimension in question.

Uncontrolled use of ad hoc indices is discouraged, because it creates heavy computing demands with little benefit. Therefore, there should be a "standard risk assessment" that is constantly kept unfinished. It would contain recommended indices for all major dimensions. This way, at least the standard indices are always used in computations, and the need for users to develop their own indices is reduced.

When the new results are stored in the database, the old results of the variables are deleted. The different versions of the variable results are NOT permanently stored anywhere. However, when a risk assessment report is created using the reporting tool, the result distributions used for that report are stored, together with the definitions and other data about all variables. Thus, a full copy of everything that relates to a particular assessment can be downloaded and stored outside the result database.

Follow-up of validity

The result of a variable is valid from its update until something that affects its content (i.e., the four things listed above) changes. Therefore, there must be a system that tracks what these things are for a particular variable, and whether they have changed since the last calculation of the variable result. When the data in the Causality, Formula, and Data tables are combined with the dates when the parent variables were run, it can automatically be concluded whether the variable result is still valid. If the variable is older than its determinants, the result needs to be recalculated. This cannot be done fully automatically, because some variables are probably being actively edited, which would create a constant need to update everything downstream. In addition, some complex variables may take weeks to compute.
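The validity rule reduces to a date comparison, sketched below with illustrative names and dates:

```python
from datetime import date

def is_valid(run_date, parent_run_dates, definition_changed):
    """True if the stored result is still valid: no determinant (a parent's
    run, or the variable's own Definition) is newer than the result's run."""
    if definition_changed > run_date:   # formula/data/causality edited since
        return False
    return all(d <= run_date for d in parent_run_dates)

# A parent was re-run after this variable's last run -> result is stale
stale = is_valid(date(2008, 9, 1), [date(2008, 9, 15)], date(2008, 8, 1))
# The variable was run after all its determinants -> result is valid
fresh = is_valid(date(2008, 9, 20), [date(2008, 9, 15)], date(2008, 8, 1))
```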

Therefore, there should be a planning system for result updates. This can easily be done by adding tables Run and Run_list to the database. These tables contain information about the runs that have been performed or are being planned to be performed. The user can add variables to and delete them from the lists of planned runs. The needs for updating can be combined into practical collections of variables, given their connections, computer time needed, and computer time available. Then, when the task has been defined and the resources are available, a computer run can automatically be performed.

Suggested techniques to get started

MySQL:

The current idea is to describe the variables in MediaWiki, which is text-based software. It would therefore be very difficult to operate these functionalities from there. Instead, if we store the data in the most convenient way, it can be utilised effectively. The most convenient way is an SQL database, which is the standard for large databanks. Among SQL implementations, MySQL is the best choice for several reasons:

  • It is freely available open-source software.
  • It is easy to use.
  • It has powerful functionalities.


To make this work, we need a variable transfer protocol so that the result of a variable can be retrieved either automatically by calculating software, or manually by a user who wants to explore the result. Presentation software can be built on top of the database, so that the user sees nice distributions instead of huge lists of numbers. The development of this software is, again, technically straightforward, because:

  • It only communicates with the MySQL database, except that some launch code must be placed in other parts of the toolbox. Thus, the development can easily be decentralised.
  • Something applicable probably already exists in the open-source world.
  • It is not needed in the early life stages of the toolbox.


A suggested table and column structure for the database

Variable

FIELD TYPE EXTRA
Var_id mediumint(8) primary
Var_name varchar(20) unique
Var_title varchar(100)
Var_scope varchar(1000)
Var_unit varchar(16)
Page_id mediumint(8)
Wiki_id tinyint(3)

Result

FIELD TYPE EXTRA
Result_id int(10) primary
Var_id mediumint(8)
Result varchar(1000)
Sample smallint(5)

Location

FIELD TYPE EXTRA
Loc_id mediumint(8) primary
Dim_id mediumint(8)
Location varchar(1000)

Dimension

FIELD TYPE EXTRA
Dim_id mediumint(8) primary
Dim_name varchar(100)
Dim_title varchar(100)
Dim_unit varchar(16)
Page_id mediumint(8)
Wiki_id tinyint(3)

Index

FIELD TYPE EXTRA
Ind_id int(10) primary
Ind_name varchar(100)
Dim_id mediumint(8)

Rows

FIELD TYPE EXTRA
Ind_id int(10) unique
Row_number int(10) unique
Loc_id mediumint(8)

Loc_of_result

FIELD TYPE EXTRA
Loc_id mediumint(8) unique
Result_id int(10)
Var_id mediumint(8)
Ind_id mediumint(8)
N mediumint(8)



The result database stores variable results in a way that allows them to be used independently of the assessment in which they were created.

Possible uses of the database

Making value-of-information analyses

Value of information (VOI) is a decision analysis tool for estimating the importance of remaining uncertainty for decision-making. For a detailed description, see Value of information. The result database can be used to perform a large number of VOI analyses, because all variables are already in the right format: random samples from uncertain variables. The analysis is done by optimising an indicator by adjusting a decision variable, while the variable under analysis is conditioned on different values. In theory, all this can be done in the result database simply by naming the indicator, the decision variable, and the variable of interest. Practical tools should be developed for this. After that, systematic VOI analyses can be made over a wide range of environmental health issues.
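A minimal sketch of one such analysis is the expected value of perfect information (EVPI), computed directly from random samples, which is the form in which the result database stores variables; the utility function and samples below are illustrative:

```python
def evpi(samples, decisions, utility):
    """EVPI = E[max_d U(d, X)] - max_d E[U(d, X)], estimated from samples."""
    n = len(samples)
    # Value with current information: one decision must serve all samples
    prior = max(sum(utility(d, x) for x in samples) / n for d in decisions)
    # Value with perfect information: the best decision per sample
    perfect = sum(max(utility(d, x) for d in decisions) for x in samples) / n
    return perfect - prior

samples = [0.0, 10.0]            # samples of the uncertain variable of interest
decisions = ["act", "wait"]      # the decision variable's options
utility = lambda d, x: (5.0 - x) if d == "act" else 0.0  # the indicator
voi = evpi(samples, decisions, utility)
```

A positive EVPI means the remaining uncertainty matters: with perfect information the decision-maker would sometimes choose differently, so reducing the uncertainty has value.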

The improvement of the quality of a variable in time

All results that have once been stored in the result database remain there. Although the old results are not interesting for environmental health assessments after the updated result has been stored, they can be very interesting for other purposes. Some potential uses are listed below:

  • The informativeness and calibration (see performance) can be evaluated for a single variable in time against the newest information.
  • Critical pieces of information that made a major contribution to the informativeness and calibration can be identified afterwards.
  • A large number of variables can be assessed, and questions such as the following can be asked:
    • How much work is needed before a variable has reasonable performance for practical applications?
    • What are the critical steps after which the variable performance is saturated, i.e., does not improve much despite additional effort?


Some useful syntax

http://www.baycongroup.com/sql_join.htm


Useful queries that are not (yet) part of a model of procedure

List all dimensions that have indices, and the indices concatenated:

SELECT Dim_name, Dim_title, Dim_unit,
       GROUP_CONCAT(Ind_name ORDER BY Ind_name SEPARATOR ', ') AS Indices
FROM Dimension
JOIN `Index` ON Dimension.Dim_id = `Index`.Dim_id
GROUP BY Dim_name
ORDER BY Dimension.Dim_id


List all indices, and their locations concatenated:

SELECT Dim_name, Dim_title, Dim_unit, Ind_name,
       GROUP_CONCAT(Location ORDER BY Row_number SEPARATOR ', ') AS Locations
FROM `Index`
JOIN Rows ON `Index`.Ind_id = Rows.Ind_id
JOIN Location ON Rows.Loc_id = Location.Loc_id
JOIN Dimension ON `Index`.Dim_id = Dimension.Dim_id
GROUP BY Ind_name
ORDER BY Dim_name, `Index`.Ind_name


List all variables and their runs, and also list all dimensions (concatenated) used for each variable for each run.

SELECT Var_id, Run_id, Var_name, Var_title, GROUP_CONCAT(Dim_name SEPARATOR ', ') as Dimensions, n, Run_method
FROM
   (SELECT Loc_of_result.Var_id, Run_list.Run_id, Var_name, Var_title, Dim_name, n, Run_method
   FROM Loc_of_result, Run_list, Run, Variable, Location, Dimension
   WHERE Loc_of_result.Result_id = Run_list.Result_id 
   AND Run_list.Run_id = Run.Run_id
   AND Loc_of_result.Var_id = Variable.Var_id
   AND Loc_of_result.Loc_id = Location.Loc_id 
   AND Location.Dim_id = Dimension.Dim_id
   GROUP BY Dimension.Dim_id, Loc_of_result.Var_id, Run_list.Run_id
   ORDER BY Loc_of_result.Var_id, Run_list.Run_id) as temp1
GROUP BY Var_id, Run_id


Other queries

This query was used to transform the Var_id data from the table Result to Loc_of_result. This was a one-time operation that is recorded for historical interest only.

UPDATE Loc_of_result, 
  (SELECT Variable.Var_id, Var_name, Loc_of_result.Loc_id, Loc_of_result.Result_id
   FROM Variable, Loc_of_result, Result
   WHERE Variable.Var_id = Result.Var_id and
   Loc_of_result.Result_id = Result.Result_id
   GROUP BY Loc_of_result.Result_id, Loc_of_result.Loc_id) as temp1
SET Loc_of_result.Var_id = temp1.Var_id
WHERE Loc_of_result.Loc_id = temp1.Loc_id and
   Loc_of_result.Result_id = temp1.Result_id