Scientific method

Scientific method refers to techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.

Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methodologies of knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses. These steps must be repeatable in order to dependably predict any future results. Theories that encompass wider domains of inquiry may bind many hypotheses together in a coherent structure. This in turn may help form new hypotheses or place groups of hypotheses into context.

Among other facets shared by the various fields of inquiry is the conviction that the process be objective, so as to reduce biased interpretations of the results. Another basic expectation is to document, archive and share all data and methodology so that they are available for careful scrutiny by other scientists, thereby allowing other researchers the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.

Hypothesis development and testing in assessments

Assessments can be considered as processes of formulating questions and corresponding hypotheses as answers to those questions, in a manner quite similar to that proposed by the interrogative model of inquiry (the I-model) developed by Jaakko Hintikka [1]. In an assessment there is a principal question, determined by the goals of the assessment, and several subordinate questions whose answers are needed in order to approach the principal question. Hypotheses are then developed as answers to these questions. This kind of inquiry and reasoning can be conceptualized within a trialogical framework, in which the inquirer, other inquirers, and the object of knowledge are inextricably bound up with each other in long-term processes of inquiry [2].
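As a concrete illustration, the question structure of an assessment can be sketched as a simple tree of a principal question, subordinate questions and candidate answers. The following Python sketch is purely illustrative; the class name and the example questions are assumptions, not part of any actual Opasnet tool.

  from dataclasses import dataclass, field

  @dataclass
  class Question:
      text: str                                          # the question itself
      hypotheses: list = field(default_factory=list)     # candidate answers to the question
      subquestions: list = field(default_factory=list)   # subordinate questions needed to answer it

  # A hypothetical principal question with two subordinate questions.
  principal = Question(
      text="Should the proposed emission limit be adopted?",
      subquestions=[
          Question("What are the current exposure levels?",
                   hypotheses=["Exposure exceeds the proposed limit"]),
          Question("What health benefit would the limit bring?",
                   hypotheses=["A measurable reduction in adverse outcomes"]),
      ],
  )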

The hypothesis-testing approach requires considering assessment as a series of research questions. This leads to a modular structure, where each distinct object describes a particular phenomenon of the world. An object consists of a research question about the phenomenon (scope), information relevant for answering the question (definition), and potential answers to the question (result). The potential answers are treated as hypotheses that are critically evaluated against existing information in the definition.
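A minimal sketch of such an object in Python could look as follows; the attribute names mirror the scope-definition-result structure described above, while the class name and example content are hypothetical.

  from dataclasses import dataclass, field

  @dataclass
  class AssessmentObject:
      scope: str                                       # the research question about the phenomenon
      definition: list = field(default_factory=list)   # information relevant for answering the question
      result: list = field(default_factory=list)       # candidate answers, treated as hypotheses

  # Hypothetical example of one object within a larger assessment.
  exposure = AssessmentObject(
      scope="What is the average daily exposure of the study population?",
      definition=["monitoring data", "time-activity surveys"],
      result=["Hypothesis: average exposure is below the guideline value"],
  )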

The hypotheses developed as answers to the questions should be falsifiable. The hypothesis is then improved through attempts to falsify it or its parts, and through corrective actions taken to address the points raised by scientific criticism. Formal argumentation can be used to organize scientific criticism and resolve disputes that come up during the development and challenging of hypotheses.
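One way to picture the role of formal argumentation is as a tree of claims and counter-claims, in which a hypothesis stands only if every attack against it is itself defeated. The Python sketch below is a simplified illustration under that assumption; it does not describe any specific argumentation formalism or Opasnet tool.

  from dataclasses import dataclass, field

  @dataclass
  class Argument:
      claim: str
      attacks: list = field(default_factory=list)   # counter-arguments directed against this claim

  def stands(argument):
      """An argument stands if every attack against it is itself defeated."""
      return all(not stands(attack) for attack in argument.attacks)

  # Hypothetical dispute: a criticism of the hypothesis is itself rebutted.
  hypothesis = Argument("The exposure model predicts measured concentrations adequately.")
  criticism = Argument("The model ignores indoor sources.")
  rebuttal = Argument("Indoor sources are negligible in this setting.")
  criticism.attacks.append(rebuttal)
  hypothesis.attacks.append(criticism)

  print(stands(hypothesis))   # True: the only attack is rebutted, so the hypothesis stands for now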

In principle, there can be two kinds of assessment. In problem-driven assessments the principal question is derived from an identified information need in society. This assessment type could also be called demand-pull assessment. Exploratory assessments start from the point of view of an existing knowledge base and focus on generating and evaluating questions that can be answered by means of assessment. This assessment type could also be called knowledge-push assessment. Despite the obvious difference in perspective, both types turn out to be an interplay of posing questions and developing hypotheses as answers to them.

Scientific method in developing assessment methods

The current assessment methods are based on extensive scientific expertise and practical experience. They are typically created in lengthy processes involving the best experts, who seek a consensus about best practices. When applied, these methods generally produce fairly solid assessments. Here, solid means an assessment product that holds up well against scientific criticism after it is published. Scientific criticism is a process in which falsifiable predictions (hypotheses) are tested empirically, and hypotheses that are inconsistent with observations are rejected.

However, the assessment method itself, or the process of making an assessment, is rarely subject to scientific criticism. It is nevertheless possible to come up with a system in which the properties of an assessment method are constantly subject to scientific criticism and the method is still able to produce solid assessments. Both the product and the method producing it should be subject to scientific criticism at all times. Ideally, scientific criticism can be presented by anyone, so the issue should also be considered from an open participation point of view.

An assessment method is treated in the same manner: as a modular compilation of sub-modules, each with a scope, definition and result. However, methods cannot be directly evaluated against observations. Therefore, explicit performance criteria for assessment methods need to be developed.
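For illustration, such performance criteria could be made explicit, for example as a weighted checklist against which a method is rated. The criteria, weights and ratings below are hypothetical placeholders, not an established Opasnet scoring scheme.

  # Hypothetical performance criteria with weights summing to one.
  criteria = {
      "quality of content": 0.4,   # how well the produced assessments answer their questions
      "applicability": 0.3,        # usability of the output for its intended purpose
      "efficiency": 0.3,           # resources required to carry out an assessment
  }

  def score_method(ratings):
      """Weighted average of criterion ratings, each given on a 0-1 scale."""
      return sum(weight * ratings.get(name, 0.0) for name, weight in criteria.items())

  example_ratings = {"quality of content": 0.8, "applicability": 0.6, "efficiency": 0.7}
  print(round(score_method(example_ratings), 2))   # 0.71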

References

  1. Hintikka, J. 1985. True and False Logics of Scientific Discovery, in J. Hintikka and F.J. Vandamme (eds.): Logic of Discovery and Logic of Discourse, Plenum, New York.
  2. Paavola, S., Hakkarainen, K. and Sintonen, M. 2006. Abduction with Dialogical and Trialogical Means. Logic Journal of the IGPL 14: 137-150.
