Uploading to Opasnet Base
Uploading to Opasnet Base helps you understand what data could and should be uploaded to Opasnet Base and what the recommended data structures and formats are. For technical instructions on how to use the current upload software, see Opasnet Base connection. For a general description of the database, see Opasnet Base, and for technical details about the database, see Opasnet Base structure. For details about downloading data, see Opasnet Base UI.
Scope
What data could and should be uploaded to Opasnet Base, and what are the recommended data structures and formats?
Definition
This section describes what the inputs are and the different ways they can be formatted for a successful upload.
What are the topics about which data can be uploaded into the Base? Well, basically any topic that provides useful information for any decision-making situation that has societal relevance. This sounds like a very wide definition, and it is. The data may be about which car models are environmentally friendly. It may be about pollutants in food. It may be new ideas about a societally just value-added tax.
What are, then, the data structures that are allowed? Although not all structures are allowed, almost any data can easily be transformed into the structure that is used in Opasnet Base. The data must be formatted as a two-dimensional table with one observation on each row. Cells of the table may contain either text or numerical values. Columns contain either explanations (independent variables, in statistical terms) or the actual observations (dependent variables). This difference is important: explanations are things that are fixed before the actual observation, while observations are the things that are actually measured or observed. See the following table as an example.
City | Year | Sex | Body mass index BMI (kg/m2) | Blood cholesterol (mM) |
---|---|---|---|---|
London | 2010 | Male | 20 | 3.31 |
London | 2010 | Male | 25 | 6.83 |
London | 2010 | Female | 20 | 5.55 |
London | 2010 | Female | 30 | 5.42 |
London | 2010 | Female | 25 | 4.19 |
New York | 2010 | Male | 22 | 3.33 |
New York | 2010 | Male | 26 | 5.84 |
New York | 2010 | Female | 28 | 5.67 |
New York | 2010 | Female | 26 | 4.52 |
New York | 2010 | Female | 24 | 5.67 |
The table seems unambiguous at first glance, but it is impossible to interpret it correctly without knowing which columns are explanations and which are observations. This may be a study performed in London and New York in 2010, where random people were asked for a blood test. In this case, city and year are explanations, and the other columns are observations. However, the data may just as well be summary statistics from a larger study, where the studied individuals were grouped in these cities based on their sex and body mass index (BMI), and the mean cholesterol in each group is the only observation column. (The data implies that London used BMI groups 5 kg/m2 wide, while New York used BMI groups 2 kg/m2 wide, but you cannot know this based on the data alone.)
However, if you know which columns are explanations and which are observations, you can actually deduce many important aspects of the design of the study from which the data came. Of course, many things must still be explained elsewhere, such as how the people were selected and what the studied base population was.
Another important feature of the data is whether it is deterministic or probabilistic. With deterministic data, each row is assumed to be an independent piece of information. With probabilistic data, the rows are assumed to be random draws from a pool of potential observations (like an urn full of balls of different colours, each having the same probability of being picked). However, not all observations are picked from the same pool, but from several pools, each uniquely defined by the explanation columns.
Let's assume that we look at the table above and learn that it is probabilistic data with two explanation columns. We then know that people were randomly picked from either London or New York in 2010 (explanation columns are always the first ones, on the left side). In London, two happened to be male and three female, with the measured BMIs and cholesterol levels. Thus, there are five observations from the pool defined by "London 2010", and also five observations from "New York 2010". These observations are numbered 1..5 within each pool. If the data is deterministic, the observation number is 0 for all rows.
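To make the bookkeeping concrete, here is a minimal sketch in Python (using pandas; this is only an illustration and not part of the upload software) that rebuilds the example table and numbers the observations within each explanation pool as described above. The variable and column names are chosen for this example only.

```python
# Illustrative sketch only: rebuild the example table and number observations
# within each explanation pool (1..n for probabilistic data, 0 for deterministic).
import pandas as pd

data = pd.DataFrame({
    "City": ["London"] * 5 + ["New York"] * 5,
    "Year": [2010] * 10,
    "Sex": ["Male", "Male", "Female", "Female", "Female",
            "Male", "Male", "Female", "Female", "Female"],
    "BMI": [20, 25, 20, 30, 25, 22, 26, 28, 26, 24],
    "Cholesterol": [3.31, 6.83, 5.55, 5.42, 4.19, 3.33, 5.84, 5.67, 4.52, 5.67],
})

explanation_cols = ["City", "Year"]  # explanation columns are always the first ones
probabilistic = True                 # set to False for deterministic data

if probabilistic:
    # Rows sharing the same explanation values belong to the same pool.
    data["Obs"] = data.groupby(explanation_cols).cumcount() + 1
else:
    # Deterministic data: every row is an independent piece of information.
    data["Obs"] = 0

print(data)
```

Running the sketch gives observation numbers 1..5 both for the "London 2010" pool and for the "New York 2010" pool, matching the description above.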
Inputs
Overview
This method is used to save original data or model results (a study or a variable, respectively) into the Opasnet Base.
If an object with the same Ident already exists in the Opasnet Base, the information will be added to that object. Before you start, make sure that you have created an object page in the Opasnet wiki for each object (study or variable) you want to upload.
You must give your Opasnet username to upload data. The username will be stored together with the upload information. You must also know the writer password for the Opasnet Base if you are not using the AWP web interface.
Object info contains the most important metadata about your data. All information should be between 'quotation marks' so that it is not mistakenly interpreted as Analytica node identifiers. The fields are listed below; a brief sketch after the list summarizes them.
- Analytica identifier (only needed for type 2, i.e. when Analytica nodes are uploaded): the identifier(s) of the node(s).
- Ident is the page identifier in Opasnet. If your study or variable does not already have a page, you must create one before uploading the data. The identifier is found in the metadata box in the top right corner of the Opasnet page. Note that if an entry with the identical Ident already exists in the Opasnet Base, the metadata will NOT be updated but the existing metadata will be used instead.
- Name: a description that may be longer than an identifier. This is typically identical to the respective page in Opasnet.
- Unit: unit of measurement. This is the same for the whole object. However, if different observation columns have different units, this must be clearly explained in the respective Opasnet page.
- # Explanation cols: the number of columns that contain explanatory information (see below).
- Observation name: a common name for all observation columns. If omitted, 'Observation' is used. See below for more details.
- If "Probabilistic?" is Yes or 1, then each row of the data table is considered a random draw from a data pool with the explanation values that are listed on that row.
- Append to upload: Typically, a data upload happens in a situation where corrected or improved data is entered, and any already existing data is replaced by the new data. (The old data is not deleted, and it can be retrieved if needed.) In this case, the answer to the question "Replace data?" should be Yes. On the other hand, sometimes the data is entered in smaller parts, and the whole data set consists of several uploads. Then you should answer No, and the data will be shown to the user as one piece of data together with the previous upload(s).
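As a summary of the fields above, the following hypothetical sketch collects the object info for the example table in one place. The values are placeholders; in practice they are typed into the upload form, not into Python.

```python
# Hypothetical summary of the object info fields; all values are placeholders.
object_info = {
    "Analytica identifier": None,        # only needed when Analytica nodes are uploaded
    "Ident": "your_page_ident",          # page identifier from the Opasnet metadata box
    "Name": "Blood cholesterol example", # descriptive name, typically the page name
    "Unit": "mM",                        # one unit for the whole object
    "Explanation cols": 2,               # City and Year in the example table
    "Observation name": "Observation",   # common name for all observation columns
    "Probabilistic": True,               # rows are random draws from explanation pools
    "Replace data": True,                # replace earlier data instead of appending
}
```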
The data can be entered in four different ways. All of them are equivalent; it only depends on which is the easiest way to enter your data table(s).
- Copy-paste a table: The data are copy-pasted into the field 'Copy-paste'. The source of the data can be any spreadsheet or text processor, as long as each column is separated by a tab and each row by a line break. The first row must contain the column names. Note that the whole block of data must be between 'quotation marks'. The limitation is that this can only handle fairly small tables (less than about 30 kiB). A sketch after this list shows one way to produce such a block.
- Analytica model: Give identifiers of Analytica model nodes, and the node results will be uploaded. You can upload a whole model at one time.
- Node to be formatted as data table: Give the column names and number of rows, and a table with the right size will be created. Then, you can copy-paste your data into this table. The benefit is that larger data can be copied.
- Ready-made data table node: If you already have your data in the right format for a modified data table, you can simply give the name of that Analytica node, and it will be used directly. This requires advanced knowledge of the specifications of a modified data table. This is NOT the same as the input data table format described above.
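For the copy-paste option, the data must be tab-separated with the column names on the first row. The following sketch (assuming the pandas DataFrame `data` from the earlier example) shows one way to produce such a block; the surrounding 'quotation marks' are added in the upload form itself.

```python
# Produce a tab-separated block suitable for the 'Copy-paste' field.
# The first row contains the column names; rows are separated by line breaks.
tsv_block = data.to_csv(sep="\t", index=False)
print(tsv_block)
```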
Advanced options:
There are also advanced methods for uploading data. 'Upload non-public data' stores the actual data (the values in the observation columns) in a database that requires a password for reading. However, all other information (including the upload metadata and the explanation data) is openly available. For very large data, you can also create data files that match the required data format and import those into Opasnet Base (although you need a specific password to be able to import files).
Platform: You must choose THL computer if you are not using the AWP web interface.
Outputs
To learn how to access the data in Opasnet Base, read Opasnet Base; for technical details, see Opasnet Base UI.
Procedure
When you know what to put in the input fields, the rest is straightforward. First, check that you have entered the data correctly by pressing the 'Check that your data looks sensible: Data table' button. You may correct any errors at this point. Then press 'Upload to Opasnet Base'. The upload may take several minutes, especially with large data.
Note! Currently you can only upload data if you have Analytica Enterprise on your machine and your computer is located in the THL network. You need the file Opasnet base connection.ANA. You can even upload a whole Analytica model at one time. You have to contact Juha Villman or Jouni to get the password for uploading data.
If you don't have Analytica Enterprise, you can still provide data to Opasnet Base. Just create a study or variable page that contains the information about your data, upload a data file to Opasnet (or simply copy it to the page if the data is small), link to the file from the page you created, and contact Juha Villman or Jouni so that they can upload the data for you.