Opasnet base structure


Result database is a storage and retrieval system for variable results. It is basically an SQL database with the following functionalities:

  1. Storage of results of variables with uncertainties when necessary, and as multidimensional arrays when necessary.
  2. Automatic retrieval of results when called from the collaborative workspace or other platforms.
  3. Description and handling of the dimensions that the variables may take.
  4. Storage and retrieval system for items that are needed to calculate the results of variables.
  5. A platform for performing computer runs to update the results of variables.
  6. Follow-up of the linkages between variables, the data about a particular variable, and the computing formula of the variable, in respect to their impact on the variable result.
  7. Follow-up of the age and validity of the content based on the previous point.
  8. A platform for planning computer runs based on the update need, CPU demand, and CPU availability.

Functionalities of the result database

Storage and retrieval of results of variables

The most important functionality is to store and retrieve the results of variables. Because variables may take very different forms (from a single value such as a natural constant to an uncertain spatio-temporal concentration field over the whole of Europe), the database must be very flexible. The basic solution is described on the variable page and is only briefly summarised here. The result is described as

  P(R|x1,x2,...) 

where P(R) is the probability distribution of the result and x1 and x2 are defining locations where a particular P(R) applies. A dimension means a property along which there are multiple locations and the result of the variable may have different values when the location changes. In this case, x1 and x2 are dimensions, and particular values of x1 and x2 are locations. A variable can have zero, one, or more dimensions. Even if a dimension is continuous, it is usually operationalised in practice as a list of discrete locations. Such a list is called an index, and each location is called a row of the index. In the general information structure of the new risk assessment method, dimensions are Classes with a special purpose. An index can be thought of as a variable that inherits its plausible range from a dimension (class).

Uncertainty about the true value of the variable is operationalised as a random sample from the probability distribution, in such a way that the samples are located along an index Sample, which is a list of integers 1,2,3,...,n, where n is the number of samples.
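As a minimal sketch of this storage scheme (using SQLite as a stand-in for MySQL, with simplified column types), a deterministic result and an uncertain result stored along the Sample index could look like this:

```python
import sqlite3
import random

con = sqlite3.connect(":memory:")
# Simplified Result table: one row per value, as described in this page.
con.execute("CREATE TABLE Result (Result_id INTEGER PRIMARY KEY, "
            "Var_id INTEGER, Result REAL, Sample INTEGER)")

# A deterministic result uses Sample = 0.
con.execute("INSERT INTO Result (Var_id, Result, Sample) VALUES (1, 9.81, 0)")

# An uncertain result is a random sample located along the Sample index 1..n.
random.seed(1)
n = 1000
rows = [(2, random.gauss(100.0, 15.0), s) for s in range(1, n + 1)]
con.executemany("INSERT INTO Result (Var_id, Result, Sample) VALUES (?, ?, ?)",
                rows)

# Retrieval: summarise the distribution of variable 2 from its sample.
mean, count = con.execute(
    "SELECT AVG(Result), COUNT(Sample) FROM Result WHERE Var_id = 2"
).fetchone()
```

The same rows would carry locations along other indices via the Location table when the variable has more dimensions than just Sample.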

The dimensions of a variable are determined by the parent variables (by inheritance) and the formula used to calculate the result. Thus, there is no place where the dimensions of a particular variable are explicitly asked for. In addition, the indices (as operationalisations of dimensions) are NOT properties of variables but of risk assessments. This may sound unintuitive, but the reasoning is that indices are just practical ways to handle dimensions, and these practical needs may change from one assessment to another.

The tables Variable and Result contain the result data. In addition, Location, Dimension, Index, and Rows contain data about the dimensions and indices used. These tables together offer the functionalities of data storage and retrieval, and handling of multidimensionality and uncertainty.

Calculation of the updated results

The result of a variable can be calculated when four things are known:

  1. The list of upstream variables (parents) (Definition/causality attribute),
  2. The results of the parent variables,
  3. The data used to derive the result (Definition/data attribute), and
  4. The formula used to calculate the result based on the items above (Definition/formula attribute).

The three sub-attributes of the Definition are represented by three tables in the result database: Causality, Formula, and Data. In addition, the results of the parents can be obtained from the Result table. The variable transfer protocol is used to extract these data from the result database, send them to external software such as R to calculate the result, and store the calculated result in the Result table of the database. The technical solutions for doing this in practice still have to be developed.
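Since the technical solutions are still open, the following Python sketch only illustrates the intended flow of the variable transfer protocol; the ToyDB class and all of its method names are hypothetical stand-ins for the database and the external software:

```python
class ToyDB:
    """Minimal in-memory stand-in for the result database (hypothetical)."""
    def __init__(self):
        self.causality = {3: [1, 2]}     # Causality: variable 3 has parents 1, 2
        self.formula = {3: "sum"}        # Formula table: software code
        self.data = {3: None}            # Data table: no extra data here
        self.results = {1: 4.0, 2: 6.0}  # Result table

    def fetch_parents(self, v): return self.causality.get(v, [])
    def fetch_formula(self, v): return self.formula[v]
    def fetch_data(self, v): return self.data[v]
    def fetch_result(self, v): return self.results[v]
    def store_result(self, v, r): self.results[v] = r

def update_variable(db, var_id, evaluate):
    """Sketch of the protocol: gather the four items needed for calculation,
    evaluate the formula in external software, store the result back."""
    parents = db.fetch_parents(var_id)
    parent_results = {p: db.fetch_result(p) for p in parents}
    formula = db.fetch_formula(var_id)
    data = db.fetch_data(var_id)
    result = evaluate(formula, data, parent_results)  # e.g. a call out to R
    db.store_result(var_id, result)
    return result

# The "external software" here is just a lambda summing the parents.
db = ToyDB()
update_variable(db, 3, lambda formula, data, parents: sum(parents.values()))
```

In the real system the evaluate step would serialise the items, run them in R or another package named in the Formula table, and write the sample back into the Result table.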

When a variable result is calculated, the computing software must know which indices must be used with which variables. This can be resolved automatically using the following reasoning algorithm.

  • Make a list of all unfinished risk assessments.
  • Make a list of all indices in these risk assessments.
  • Compile all indices of a particular dimension into one large "super-index" with all the locations.
  • Use these "super-indices" in the calculations.
  • Apply a particular "super-index" for a particular variable, if that variable has the dimension in question.
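The reasoning algorithm above can be sketched as follows; the data structures are hypothetical, as the real locations would come from the Index and Rows tables:

```python
def build_super_indices(unfinished_assessments):
    """Merge the indices of all unfinished risk assessments into one
    "super-index" per dimension, covering every location.
    unfinished_assessments: list of dicts mapping a dimension name to a
    list of locations (an index). First-seen order of locations is kept."""
    super_indices = {}
    for assessment in unfinished_assessments:
        for dim, index in assessment.items():
            merged = super_indices.setdefault(dim, [])
            for location in index:
                if location not in merged:
                    merged.append(location)
    return super_indices

# Two unfinished assessments with partly overlapping indices.
ra1 = {"Country": ["FI", "SE"], "Year": [2000, 2001]}
ra2 = {"Country": ["FI", "DE"]}
supers = build_super_indices([ra1, ra2])
# A variable having the dimension Country is computed over the merged
# Country super-index; a variable without that dimension is unaffected.
```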

Indiscriminate use of ad hoc indices is discouraged, because they create heavy computing needs with little benefit. Therefore, there should be a "standard risk assessment" that is constantly kept unfinished. It would contain recommended indices for all major dimensions. This way, at least the standard indices are always used in computations, and the need for users to develop their own indices is smaller.

When the new results are stored in the database, the old results of the variables are deleted. The different versions of the variable results are NOT permanently stored anywhere. However, when a risk assessment report is created using the reporting tool, the result distributions used for that report are stored, together with the definitions and other data about all variables. Thus, a full copy of everything that relates to a particular assessment can be downloaded and stored outside the result database.

Follow-up of validity

The result of a variable is valid from its update until something that affects its content (i.e., the four things listed above) changes. Therefore, there must be a system that keeps track of what these things are for a particular variable, and whether they have changed since the last calculation of the variable result. When the data in the Causality, Formula, and Data tables is combined with the dates when the parent variables were run, it can be automatically concluded whether the variable is valid or not. If the variable is older than its determinants, the result needs to be recalculated. This cannot be done fully automatically, because some variables are probably being actively edited, and this would create a constant need to update everything downstream. In addition, some complex variables may take weeks to compute.
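The validity check itself is a simple date comparison, sketched here assuming the relevant dates have already been fetched from the Causality, Formula, Data, and Run tables:

```python
from datetime import date

def needs_update(var_run_date, causality_date, formula_date, data_date,
                 parent_run_dates):
    """A variable needs recalculation if any of its determinants (the
    parent list, the formula, the data, or a parent's own run) changed
    after the variable's last run. All arguments are datetime.date values;
    in the database they would come from the Causality, Formula, Data,
    and Run tables."""
    determinants = [causality_date, formula_date, data_date, *parent_run_dates]
    return any(d > var_run_date for d in determinants)

# The variable was run in March, but its formula was edited in April,
# so the result is no longer valid.
needs = needs_update(date(2008, 3, 1), date(2008, 1, 1), date(2008, 4, 15),
                     date(2008, 1, 1), [date(2008, 2, 1)])
```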

Therefore, there should be a planning system for result updates. This can easily be done by adding the tables Run and Run_list to the database. These tables contain information about runs that have been performed or are planned. The user can add variables to and delete them from the lists of planned runs. The updating needs can be combined into practical collections of variables, given their connections, the computer time needed, and the computer time available. Then, when the task has been defined and the resources are available, a computer run can be performed automatically.
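As one possible sketch, a planned run could be assembled greedily from the variables needing an update, within an available CPU-time budget; the input ordering and the cost estimates (the Run_time column) are assumptions here:

```python
def plan_run(update_needs, cpu_budget):
    """Greedy sketch of run planning: take variables needing an update,
    in a given priority order (assumed to put parents before children),
    until the available CPU time is spent.
    update_needs: list of (var_id, estimated_cpu_time) tuples.
    Returns the Run_list-style ordering and the CPU time committed."""
    run_list, used = [], 0.0
    for var_id, cpu in update_needs:
        if used + cpu <= cpu_budget:
            run_list.append(var_id)
            used += cpu
    return run_list, used

# Variable 3 is too expensive for this run and is left for a later one.
run_list, used = plan_run([(1, 2.0), (2, 5.0), (3, 40.0), (4, 1.0)], 10.0)
```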

Suggested techniques to get started

MySQL:

The current idea is to describe the variables in Mediawiki, which is text-based software. It would therefore be very difficult to operate these functionalities from there. Instead, if we store the data in the most convenient way, it can be utilised effectively. The most convenient way is to use an SQL database, which is the standard for large databanks. Among SQL software, MySQL is the best choice for several reasons:

  • It is freely available open-source software.
  • It is easy to use.
  • It has powerful functionalities.


To make this work, we need a variable transfer protocol so that the result of a variable can be retrieved either automatically by calculating software or manually by a user who wants to explore the result. Fancy presentation software can be built on top of the database, so that the user does not see huge lists of numbers but nice distributions instead. The development of this software is, again, technically straightforward, because:

  • It only communicates with the MySQL database, except for some launch code that must be placed in other parts of the toolbox. Thus, the development can easily be decentralised.
  • Something applicable probably exists in the open code world.
  • It is not needed in the early life stages of the toolbox.


A suggested table and column structure for the database

Variable
Information about variable attributes and validity
Columns
  • Var_id* (identifier of the variable)
  • Var_name (variable name as in Mediawiki)
  • Var_scope (variable scope as in Mediawiki)
  • Var_unit (variable unit as in Mediawiki)
  • Validity_date (the date when the variable last was valid [1.1.2100 if it is currently valid])
  • Run_id (the run that produced the current results of the variable)
  • Run_time (CPU time that was needed for this variable during the last run)
Result
All results are stored in this table. Each value of a result of a variable has its own row.
Columns
  • Result_id* (identifier of the row in this table)
  • Var_id (identifier of the variable)
  • Result (the actual value of the variable)
  • Sample (the row in the uncertainty index Sample. Use 0, if the result is deterministic.)
Location
The location of the result along a particular dimension.
Columns
  • Result_id*
  • Dim_id*
  • Location
Dimension
Information about dimensions
Columns
  • Dim_id* (Dimension identifier)
  • Dim_name (Dimension name)
  • Dim_scope (Dimension scope)
  • Dim_unit (Dimension unit)
  • Dim_definition (Dimension definition)
  • Dim_result (Dimension result: the range of plausible values, such as "non-negative real number", "positive integer", or an exhaustive list of labels)
Index
Information about indices
Columns
  • Ind_id* (index identifier)
  • Ind_name (index name)
  • Dim_id (dimension identifier)
Rows
Information about rows of indices
Columns
  • Ind_id* (index identifier)
  • Row_number* (the number of this row in the index)
  • Location (the location along the dimension of this row and index)
Risk_assessment
Attributes of a risk assessment
Columns
  • RA_id* (risk assessment identifier)
  • RA_name
  • RA_scope
  • RA_started (date when the risk assessment was started)
  • RA_finished (date when the risk assessment was finished)
RA_vars
Defines the variables used in a risk assessment
Columns
  • RA_id* (risk assessment identifier)
  • Var_id* (variable identifier)
RA_indices
Defines the indices used in a risk assessment
Columns
  • RA_id* (risk assessment identifier)
  • Ind_id* (index identifier)
Causality
Defines the parents in the causal chain
Columns
  • Var_id*
  • Causality_date (date when the parent list was last changed)
  • Parent_id* (var_id of a parent variable)
Formula
Defines the formulas of the variables
Columns
  • Var_id*
  • Formula_date (date when the formula was last changed)
  • Software* (name of the software able to run the formula)
  • Formula (software code)
Data
Defines the data of the variables
Columns
  • Var_id*
  • Data_date (date when the data was last changed)
  • URL* (location of the data file)
Run
Information about the computer runs
Columns
  • Run_id* (the identifier of the computer run)
  • Run_date (when the run was actually performed successfully)
  • Run_who (who performed/will perform the run)
  • Run_method (what method was/will be used in the run)
  • Planned_run_date (the estimated date for the run)
Run_list
List of variables in a run
Columns
  • Run_id* (the identifier of the computer run)
  • Run_order* (the order in which the variables will be computed)
  • Var_id (the identifier of the variable)

* This column or these columns together uniquely identify the row in the table


The result database stores variable results in such a way that they can be used independently of the assessment in which they were created.

Possible uses of the database

Making value-of-information analyses

Value of information (VOI) is a decision analysis tool for estimating the importance of remaining uncertainty for decision-making. For a detailed description, see Value of information. The result database can be used to perform a large number of VOI analyses, because all variables are already in the right format for it: random samples from uncertain variables. The analysis is done by optimising an indicator, adjusting a decision variable while the variable under analysis is conditioned on different values. In theory, all of this can be done in the result database by just listing the indicator, the decision variable, and the variable of interest. Practical tools should be developed for this. After that, systematic VOI analyses can be made across a wide range of environmental health issues.
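Assuming the results are stored as aligned Monte Carlo samples (the same Sample index, i.e. the same draws of the uncertain variables, for every decision option), the expected value of perfect information can be computed directly from them. This is a generic sketch, not the database's actual tooling:

```python
def evpi(samples_by_decision):
    """Expected value of perfect information from Monte Carlo samples.
    samples_by_decision: dict mapping each decision option to a list of
    indicator samples, aligned along the same Sample index."""
    options = list(samples_by_decision)
    n = len(samples_by_decision[options[0]])
    # Value with current information: commit to the single best option.
    best_prior = max(sum(samples_by_decision[o]) / n for o in options)
    # Value with perfect information: pick the best option per sample.
    best_posterior = sum(
        max(samples_by_decision[o][s] for o in options) for s in range(n)) / n
    return best_posterior - best_prior

# Two options whose ranking depends on the uncertain state: committing to
# either option yields 5.0 on average, while knowing the state in advance
# always yields 10.0, so the remaining uncertainty is worth 5.0.
gain = evpi({"A": [10.0, 0.0], "B": [0.0, 10.0]})
```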

The improvement of the quality of a variable in time

All results that have once been stored in the result database remain there. Although the old results are not interesting for environmental health assessments after the updated result has been stored, they can be very interesting for other purposes. Some potential uses are listed below:

  • The informativeness and calibration (see performance) of a single variable can be evaluated over time against the newest information.
  • Critical pieces of information that made a major contribution to the informativeness and calibration can be identified afterwards.
  • A large number of variables can be assessed, asking e.g. the following questions:
    • How much work is needed for a variable to reach reasonable performance for practical applications?
    • What are the critical steps after which the variable performance saturates, i.e., does not improve much despite additional effort?


Some useful syntax

http://www.baycongroup.com/sql_join.htm


List of variables at dimension and run level

SELECT Variable.var_id, var_name, var_unit, Indices.dim_id, Indices.dim_name, Indices.ind_id, Indices.ind_name, Restat.n, Run.*
FROM Variable, Run_list, Run, Loc_of_result,
   (SELECT var_id, result_id, avg(result) as result, min(result) as minimum, max(result) as maximum, count(sample) as n 
   FROM Result
   GROUP BY result_id) as Restat,
   (SELECT Dimension.Dim_id, Dimension.Dim_name, Rows.Ind_id, Ind_name, row_number, Location.Loc_id, Location 
   FROM `Dimension`, `Location`, `Rows`, `Index` 
   WHERE Dimension.Dim_id = Location.Dim_id and
   Location.Loc_id = Rows.Loc_id and
   `Index`.Ind_id = Rows.Ind_id) as Indices
WHERE Variable.var_id = Restat.var_id and
Restat.result_id = Run_list.result_id and
Run_list.run_id = Run.run_id and
Restat.result_id = Loc_of_result.result_id and
Loc_of_result.loc_id = Indices.loc_id
GROUP BY Indices.ind_id, run_id, Variable.var_id
ORDER BY Variable.var_id, run_id DESC

List of runs for each variable at result_id level

SELECT var_name, Restat.*, var_unit, Indices.*, Run.*
FROM Variable, Run_list, Run, Loc_of_result,
   (SELECT var_id, result_id, avg(result) as result, min(result) as minimum, max(result) as maximum, count(sample) as n 
   FROM Result
   GROUP BY result_id) as Restat,
   (SELECT Dimension.Dim_id, Dimension.Dim_name, Rows.Ind_id, Ind_name, row_number, Location.Loc_id, Location 
   FROM `Dimension`, `Location`, `Rows`, `Index` 
   WHERE Dimension.Dim_id = Location.Dim_id and
   Location.Loc_id = Rows.Loc_id and
   `Index`.Ind_id = Rows.Ind_id) as Indices
WHERE Variable.var_id = Restat.var_id and
Restat.result_id = Run_list.result_id and
Run_list.run_id = Run.run_id and
Restat.result_id = Loc_of_result.result_id and
Loc_of_result.loc_id = Indices.loc_id
ORDER BY run_id DESC, var_id, result_id

The newest sample from a variable to be converted into Analytica

SELECT Variable.var_name, var_unit, result, sample, dim_name, location, run_method, run_date
FROM `Variable` , Result, Loc_of_result, Location, Dimension, Run_list, Run
WHERE Variable.var_name = "Fig_3_cost_by_source"
AND Variable.var_id = Result.var_id
AND Result.result_id = Loc_of_result.result_id
AND Loc_of_result.loc_id = Location.loc_id
AND Location.dim_id = Dimension.dim_id 
AND Result.result_id = Run_list.result_id
AND Run_list.run_id = Run.run_id

The sample of the newest run of each variable

SELECT Newestrun2.*, Result.*
FROM Result, Run_list, (
   SELECT var_id, var_name, run_id, max(run_date) as run_date
   FROM (
      SELECT Variable.var_id, var_name, Run.run_id, run_date
      FROM Variable, Run, Run_list,
         (SELECT *
         FROM Result
         GROUP BY result_id) AS Resrun
      WHERE Variable.var_id = Resrun.Var_id and
      Resrun.result_id = Run_list.result_id and
      Run_list.run_id = Run.run_id
      GROUP BY Variable.var_id, run_date) as Newestrun
   GROUP BY var_id) as Newestrun2
WHERE Newestrun2.var_id = Result.var_id and
Result.result_id = Run_list.result_id and
Run_list.run_id = Newestrun2.run_id

All indices

SELECT Dimension.Dim_id, Dimension.Dim_name, Rows.Ind_id, Ind_name, row_number, Location.Loc_id, Location 
FROM `Dimension`, `Location`, `Rows`, `Index` 
WHERE Dimension.Dim_id = Location.Dim_id and
Location.Loc_id = Rows.Loc_id and
`Index`.Ind_id = Rows.Ind_id


List all dimensions that have indices, and the indices concatenated:

SELECT Dim_name, dim_title, dim_unit, GROUP_CONCAT(Ind_name ORDER BY ind_name SEPARATOR ', ') as Indices
FROM Dimension, `Index`
WHERE Dimension.dim_id = `Index`.Dim_id
GROUP BY Dim_name
ORDER BY Dimension.dim_id


List all indices, and their locations concatenated:

SELECT Dim_name, Dim_title, Dim_unit, Ind_name, GROUP_CONCAT(Location ORDER BY row_number SEPARATOR ', ') as Locations
FROM `Index`, Location, Rows, Dimension
WHERE `Index`.ind_id = Rows.ind_id AND Rows.loc_id = Location.loc_id AND `Index`.dim_id = Dimension.dim_id
GROUP BY Ind_name
ORDER BY Dim_name, `Index`.ind_name

List all variables and runs of these variables, together with dimensions used (concatenated)

This query shows all variables and their runs, and also lists all dimensions used for each variable in each run (concatenated).

SELECT Var_id, Run_id, Var_name, Var_title, GROUP_CONCAT(Dim_name SEPARATOR ', ') as Dimensions, n, Run_method
FROM
   (SELECT Loc_of_result.Var_id, Run_list.Run_id, Var_name, Var_title, Dim_name, n, Run_method
   FROM Loc_of_result, Run_list, Run, Variable, Location, Dimension
   WHERE Loc_of_result.Result_id = Run_list.Result_id 
   AND Run_list.Run_id = Run.Run_id
   AND Loc_of_result.Var_id = Variable.Var_id
   AND Loc_of_result.Loc_id = Location.Loc_id 
   AND Location.Dim_id = Dimension.Dim_id
   GROUP BY Dimension.Dim_id, Loc_of_result.Var_id, Run_list.Run_id
   ORDER BY Loc_of_result.Var_id, Run_list.Run_id) as temp1
GROUP BY Var_id, Run_id


Other queries

This query was used to transform the Var_id data from the table Result to Loc_of_result. This was a one-time operation that is recorded for historical interest only.

UPDATE Loc_of_result, 
  (SELECT Variable.Var_id, Var_name, Loc_of_result.Loc_id, Loc_of_result.Result_id
   FROM Variable, Loc_of_result, Result
   WHERE Variable.Var_id = Result.Var_id and
   Loc_of_result.Result_id = Result.Result_id
   GROUP BY Loc_of_result.Result_id, Loc_of_result.Loc_id) as temp1
SET Loc_of_result.Var_id = temp1.Var_id
WHERE Loc_of_result.Loc_id = temp1.Loc_id and
   Loc_of_result.Result_id = temp1.Result_id

This query updates the column Loc_of_result.N based on the sample size in the table Result, so that this lengthy operation need not be repeated. It should be run automatically from time to time (whenever the Result table is edited).

UPDATE Loc_of_result, 
   (SELECT Result_id, max(Sample) as n
   FROM Result
   GROUP BY Result_id) as temp1
SET Loc_of_result.N = temp1.n
WHERE Loc_of_result.Result_id = temp1.Result_id