Talk:End user evaluation
Contents of the evaluation
Note! This page temporarily contains the information about what is evaluated and why, while the main page contains only the questionnaire for the end users. After the survey ends, the survey design and results will be moved back to the main page, while the talk page will contain discussions about the evaluation.
How do the end users see the current status and future prospects of Opasnet? Specifically, the terms are used with these interpretations:
- End user: a person using Opasnet from one of the following groups:
- researchers (producing scientific information),
- [risk] assessors (producing assessments),
- authorities (utilising assessments),
- experts of an area related to Opasnet content,
- NGO representatives and other activists,
- general public and other interested individuals.
- Opasnet: a web workspace with the following parts or functionalities:
- Opasnet wiki (a website for mass collaboration),
- Opasnet Base (a database for data and results),
- open assessment (a method for performing assessments on the Internet),
- case studies (actual assessments),
- information in variable and encyclopedia pages.
- Current status:
- The appearance of the website.
- The ease of finding information.
- The amount of interesting content.
- The quality of content.
- The utility of the content for end user needs.
- Future prospects:
- End user expectation that she will use Opasnet for her own needs.
These pages should be evaluated as well, but only after the results are available:
- Benefit-risk assessment of methyl mercury and omega-3 fatty acids in fish
- Benefit-risk assessment of fish consumption for Beneris
The evaluation will consist of a survey about a) Opasnet and b) the Beneris case studies, and an analysis of Opasnet page loads (using Google Analytics).
The statistics of user visits and contributions in the Opasnet, Heande, and Beneris wikis are studied.
- Which pages are being used the most?
- Who are the main users (readers)?
- Who are the main contributors (contribution scores)?
- What is the impact of advertisement (of particular pages)?
Comments about the first draft of questionnaire on Sept 10, 2009
I think there are two ways to go about developing this questionnaire draft: 1) making small improvements within its current frame, 2) re-thinking the frame of the whole questionnaire. I give some comments that could be helpful whichever way is chosen.
Some tweaking ideas
- In "questions about Opasnet" the propositions to be rated should all
be either in the form of statements or questions, not mixed.
- The list of "questions about open assessment" seems very short. How
about questions regarding hypothesis formulation and testing, and the cultivation of information through argumentation and falsification, etc.?
- I cannot figure out what is actually asked (and why) in the last two
question sets, Opasnet analytics and scope. They need to be reformulated in some way.
- In general it is probably good if most of the questions can be put
into the form of giving scores to well-formulated propositions. Some open comment questions are probably needed as well, but there cannot be many.
Re-framing
There are questions regarding some actual information content in Opasnet, the way of making assessments/producing information, and the tool/workspace for facilitating both the creation and use of knowledge. So, there are several points of view that could be taken. Which points of view are of most interest here? Which points of view are most familiar to the intended respondents? Which points of view would give the most useful information? The different aspects are naturally inter-related, but the chosen approach can make things look quite different both in the eyes of the questioners and the respondents.
In any case, we do have a framework for considering the goodness of assessments and their outputs (or any knowledge production and their products for that matter), known as the properties of good assessment (quality of content, applicability, efficiency). I would like to see that framework applied in this questionnaire. The overall goodness is a function of all the properties and the different aspects considered in this questionnaire (information in Opasnet, ways and means of producing the information in Opasnet, use of information in Opasnet, ...) all have their own kinds of influences in different properties, and consequently the overall performance.
If the idea in this questionnaire is to look more into the functionalities of Opasnet and open assessment, rather than what good comes out of using the information produced by these means, the emphasis can be put on assessing those specific properties that mostly relate to those aspects. However, in the end, the goodness of the methods and tools for information production can only be determined according to what results (resulted) from the use of the information that was produced with them. Therefore, e.g. "easiness of using Opasnet search functions" has importance only in the context of producing and conveying useful knowledge to those who need it.
So, all in all, I suggest framing the questionnaire according to the properties of good assessment:
- Quality of content
- Internal efficiency
- External efficiency
Naturally, the property titles in the above list should be replaced with specific questions/question sets that tie the questionnaire into the specific context of the questionnaire focus. I think that one crucial thing to consider is whether the question formulations and respondents approach the issue primarily from the point of view of information production or information use, i.e. which kind of "end-users" are in question, information producers or information users. There are some pages both in Opasnet and in Heande about properties of good assessment / evaluation of assessment performance, if you want to have a closer look, but I cannot say they are in very good order currently.
I think, in general, Mikko's comments sound o.k. to me. Although I am not sure how you can translate all this into a language that "normal" people understand. I don't think you should go into more philosophical detail than you did with your first suggestion.
I think you could at least restructure the fish case questions according to the assessment structure (like Mikko suggests for the whole questionnaire).
Further suggestions (not ordered or classified):
- Is the purpose of Opasnet clear to you?
- Is the structure clear to you?
- What enhancements would you like to see in the structure?
- What enhancements would you like to see in ... (-> all the points that you already mentioned)?
- Is the content understandable to you? (in general, or for specific pages)
- Overall potential of Opasnet to become a major source of
environmental health information -> reformulating: Do you think that Opasnet could become a major source of environmental health information?
- If you don't believe it or you are not sure: Where do you think the obstacles might be? Could you make any suggestions to tackle them?
"How do the end users see the current status and future prospects of Opasnet? Specifically, the terms are used with these interpretations:" -> Are these questions, or suggestions for an evaluation of the results?
"Easiness of use from technical point of view" -> split this into at least these points:
- general usage: formatting or WYSIWYG editor
- usage of argumentation structure / buttons
"End user expectation that she will use Opasnet for her own" -> look out: "she" refers only to a woman. Maybe use "the users" and "they"; that is more neutral.
Hints for improving the questionnaire and interpreting responses
After trying to answer all the questionnaire questions, I found that some question/statement formulations need tweaking. Below are some examples that came to mind while working out my responses. I think these, and hopefully comments from others as well, can be useful both in 1) interpreting the collected responses and 2) improving the questions for further use.
- Opasnet, question 9: Agreement with the statement depends on which of the familiar methods you consider in relation to Opasnet. Most respondents can probably think of several alternative methods, which all differ in how time-saving or time-consuming they are. So if one considers Opasnet in relation to a group of alternative methods, the response is more likely to be a range of values (2-4) than one single value (e.g. 3 as the average).
- Open assessment, question 4: "Good" and "practical" characterize OA (or anything) in somewhat different ways. One is likely to attribute distinct values to the goodness and practicality of OA instead of an aggregate value (e.g. 5 and 3, instead of 4 in total).
- Open assessment, question 5: There are different kinds of end-users one could think of, and their interpretations of OA acceptability can vary a lot. Maybe the question could explicitly state that an average estimate is expected. Alternatively, different (potential, intended, ...) end-user groups could be listed, with respondents asked to estimate the acceptability of OA in each group. Also, an explicit statement of who the end-users referred to in the question are would make the interpretation of the question, and the responses to it, less ambiguous.