Why scientists use the scientific method

Unfortunately, this filtering process can cause a scientist to prefer one outcome over another. For someone trying to solve a problem around the house, succumbing to these kinds of biases is not such a big deal. But in the scientific community, where results have to be reviewed and duplicated, bias must be avoided at all costs. This is why scientists use the scientific method: it provides an objective, standardized approach to conducting experiments and, in doing so, improves their results. By using a standardized approach in their investigations, scientists can feel confident that they will stick to the facts and limit the influence of personal, preconceived notions.

Even with such a rigorous methodology in place, some scientists still make mistakes. For example, they can treat a mere hypothesis as an established explanation of a phenomenon without performing experiments. Or they can fail to accurately account for errors, such as measurement errors. Or they can ignore data that does not support the hypothesis.

Gregor Mendel, an Austrian priest who studied the inheritance of traits in pea plants and helped pioneer the study of genetics, may have fallen victim to a kind of error known as confirmation bias.

Confirmation bias is the tendency to see data that supports a hypothesis while ignoring data that does not. Some argue that Mendel obtained a certain result using a small sample size, then continued collecting and censoring data to make sure his original result was confirmed.

Goodman and Hempel both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below. The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences.

As noted in section 2, this method had been advanced by Whewell in the 19th century, as well as Nicod and others in the 20th century. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred.

If the experiment showed the test implications to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true; at best, it lent the hypothesis some degree of support. The degree of this support then depends on the quantity, variety and precision of the supporting evidence.

Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis.

The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality.

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent.
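The asymmetry Popper relied on is just the difference between two inference patterns of propositional logic, and a brute-force truth-table check makes it concrete. The sketch below is purely illustrative (nothing in it comes from Popper himself); it verifies that modus tollens is valid while affirming the consequent is not:

```python
from itertools import product

def implies(a, b):
    """Material implication: 'a -> b' is false only when a is true and b is false."""
    return (not a) or b

# Modus tollens: from (H -> O) and not-O, infer not-H.
# The pattern is valid iff the conclusion holds in every truth-table row
# where both premises hold.
modus_tollens_valid = all(
    not h
    for h, o in product([True, False], repeat=2)
    if implies(h, o) and not o
)

# Affirming the consequent: from (H -> O) and O, infer H.
affirming_consequent_valid = all(
    h
    for h, o in product([True, False], repeat=2)
    if implies(h, o) and o
)

print(modus_tollens_valid)         # True: falsification rests on a valid deduction
print(affirming_consequent_valid)  # False: a true prediction cannot prove H
```

The single row where H is false but O is true is what blocks the second pattern: the prediction can come out true even when the hypothesis is false, which is exactly why confirming evidence can never be conclusive.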

Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo-science and proper science.

Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications.

This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted.

Hence, scientific hypotheses must be falsifiable. The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method.

These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. When the orbit of Uranus was found to disagree with Newtonian predictions, for example, the ad hoc hypothesis of an outer planet (later identified as Neptune) explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper, 41f).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry. In Kuhn's famous words, history, "if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed."

See the entry on the Vienna Circle. Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place.

Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. An important part of a disciplinary matrix (Kuhn's later term for a paradigm) is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility. An important by-product of normal science is the accumulation of puzzles which cannot be solved with the resources of the current paradigm.

Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place. Feyerabend also identified the aim of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend). Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration.

Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend). An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes.

Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical, explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors. Movements in the sociology of science, like the Strong Programme, and studies of the social dimensions and causes of knowledge more generally, led to extended and close examination of detailed case studies in contemporary science and its history.

See the entries on the social dimensions of scientific knowledge and social epistemology. As they saw it, therefore, explanatory appeals to scientific method were not empirically grounded. A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2010s, a number of scientists attempting to replicate the results of published experiments could not do so.

There may be close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method.

See the entry on reproducibility of scientific results. By the close of the 20th century the search for the scientific method was flagging. Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has still been made on understanding how observation can provide evidence for a given theory.

Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references. Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present.

Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce in the mid-19th century, and the significance tests developed by Gosset. These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of action that also involved a value component.

This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher, and Neyman and Pearson). Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that it depends on the consequences of the error to decide whether it is more important to avoid rejecting a true hypothesis or accepting a false one.
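The two error types are easy to see in a simulation. The sketch below is an illustrative toy (the cutoffs, sample sizes, and effect size are invented, not drawn from the Neyman-Pearson papers): it tests whether a coin is fair by rejecting fairness whenever 100 flips produce fewer than 40 or more than 60 heads, then estimates both error rates:

```python
import random

random.seed(0)  # fixed seed so the estimates are reproducible

def reject_fair_coin(heads):
    """Reject 'the coin is fair' when the head count is extreme.
    The cutoff (outside 40..60 heads out of 100) is a rough 5%-level
    rejection region chosen for illustration."""
    return heads < 40 or heads > 60

def rejection_rate(p_heads, n_trials=2000, n_flips=100):
    """Fraction of simulated experiments in which fairness is rejected
    when each flip lands heads with probability p_heads."""
    rejections = 0
    for _ in range(n_trials):
        heads = sum(1 for _ in range(n_flips) if random.random() < p_heads)
        if reject_fair_coin(heads):
            rejections += 1
    return rejections / n_trials

# The coin IS fair: every rejection is a type I error (rejecting a true hypothesis).
type_1 = rejection_rate(0.5)
# The coin is NOT fair (p = 0.65): every failure to reject is a type II error.
type_2 = 1 - rejection_rate(0.65)
print(f"estimated type I error rate:  {type_1:.3f}")
print(f"estimated type II error rate: {type_2:.3f}")
```

Widening the acceptance region lowers the type I rate but raises the type II rate, which is precisely the trade-off that, on Neyman and Pearson's view, must be settled by weighing the consequences of each kind of error.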

Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action.

Here, the important point was not whether a hypothesis was true, but whether one should act as if it was. Similar discussions are found in the philosophical literature. On the one side, Churchman and Rudner argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis.

Others, such as Jeffrey and Levi, disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas and Howard, as well as the broad sets of case studies examining the role of values in science.

For Bayesians, probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events. Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs and incoming evidence.

The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed.
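Bayes' theorem makes this updating rule concrete. A minimal sketch, with numbers invented purely for illustration:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem, given the prior
    credence and the probability of the evidence under each alternative."""
    numerator = prior * p_evidence_if_true
    marginal = numerator + (1 - prior) * p_evidence_if_false
    return numerator / marginal

# Prior credence of 0.5 in some hypothesis; the observed evidence is three
# times as likely if the hypothesis is true (0.6) as if it is false (0.2),
# so the posterior credence rises.
posterior = update(prior=0.5, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"posterior credence: {posterior:.2f}")  # posterior credence: 0.75
```

On the Bayesian picture, the scientist's credence after observing the evidence should simply equal this conditional probability.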

Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present. Both Bayesianism and frequentism have developed over time; they are interpreted in different ways by their various proponents, and their relations to previous criticisms of attempts at defining scientific method are seen differently by proponents and critics.

The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation. Attention to scientific practice, as we have seen, is not itself new. However, the turn to practice in the philosophy of science of late can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20 th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge.

Much of this work sees method as detailed and context-specific problem-solving procedures, and takes methodological analyses to be at the same time descriptive, critical and advisory (see Nickles for an exposition of this view). The following section surveys some of these practice focuses, turning fully to topics rather than chronology.

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2) is that no such distinction can be clearly seen in scientific activity (see Arabatzis). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science, but is also an important aspect of scientific practice which philosophy of science should address (see also the entry on scientific discovery).

Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation. Examining the reasoning practices of historical and contemporary scientists, Nersessian has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed.

These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaptation of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions of the model, and a new cycle of simulation and evaluation.

Thus, while Nersessian agrees with many previous philosophers that there is no logic of discovery, she emphasizes that discoveries can derive from reasoned processes, such that model-based reasoning constitutes a large and integral part of scientific practice.

Drawing largely on cases from the biological sciences, philosophers studying mechanistic explanation have focused on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems. Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play.

The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle; Burian; Waters). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction.

Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves.

Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation. The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available.

The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated.
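A toy example of verification in this sense (an illustrative sketch; the equation and tolerances are chosen purely for simplicity): integrate dy/dt = -y with Euler's method and compare the output against the known analytic solution y(t) = e^(-t). For a first-order method, halving the step size should roughly halve the error, and confirming that convergence behavior is one standard way of checking that the equations are being correctly approximated:

```python
import math

def euler(f, y0, t_end, dt):
    """Integrate dy/dt = f(y) from t = 0 to t_end with Euler's method."""
    n = round(t_end / dt)  # integer step count avoids float drift in the loop
    y = y0
    for _ in range(n):
        y += dt * f(y)
    return y

f = lambda y: -y        # dy/dt = -y has the analytic solution y(t) = exp(-t)
exact = math.exp(-1.0)  # true value at t = 1

err_coarse = abs(euler(f, 1.0, 1.0, 0.01) - exact)
err_fine = abs(euler(f, 1.0, 1.0, 0.005) - exact)
print(f"error with dt=0.01:  {err_coarse:.2e}")
print(f"error with dt=0.005: {err_fine:.2e}")
print(f"error ratio: {err_coarse / err_fine:.2f}")  # close to 2 for a first-order method
```

Note that this check says nothing about whether dy/dt = -y is an adequate model of the phenomenon being studied; that is the separate question of validation.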

Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model. A number of issues related to computer simulations have been raised. The identification of verification and validation as the testing methods has been criticized by Oreskes et al. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart; Parker; Winsberg). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs.

The status of simulations as experiments has therefore been examined (Kaufmann and Smarr; Humphreys; Hughes; Norton and Suppe; Mayo; Parker).

At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation. For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain.

Rather, they seem to crucially involve aspects of both. It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science).

The rise of data-intensive science has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see, e.g., Mazzocchi or Succi and Coveney). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Dewey). The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science.

We do not have a fixed scientific method to rally around and defend. Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice.

For example, references to conjectures and refutations as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations, because in deductive inferences the conclusion follows necessarily from the premises. In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry, from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data.

However, scientific publications do not in general reflect the process by which the reported scientific results were produced. Publications of research results, it has been argued, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore for a review of this work).

Philosophical positions on the scientific method have also made it into the court room, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. Further, in the Daubert case, the court referred to the works of Popper and Hempel in its ruling.

(Justice Blackmun, Daubert v.) The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science.

Returning to the everyday version of the method: start by asking a question about something you observe. After that, form a hypothesis. A hypothesis is a potential explanation to your question.

Predict what the hypothesis may lead to and conduct an experiment to test it out. Analyze the data to draw a conclusion from your findings. Share your results.

Suppose, for example, that your car won't start. You ask yourself: How can I solve this problem? Predict more specifics: a dead battery, an ignition issue, or an empty gas tank. Next, test your predictions: turn on the headlights, check the spark plug wires, and dip a stick into the gas tank.

Analyze your results: the headlights work, there is a strong ignition spark, but there is no gas on the dipstick, even though the gas gauge reads half-full. Then, draw a conclusion.
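The car walkthrough above can even be written out as a toy falsification loop. This is purely illustrative; the observations are the hypothetical ones from the example:

```python
# Observations from the (hypothetical) tests in the example above.
observations = {
    "headlights_work": True,        # the battery delivers power
    "strong_ignition_spark": True,  # the ignition system fires
    "gas_on_dipstick": False,       # the tank is empty despite the gauge
}

# Each hypothesis predicts that its corresponding test will fail;
# a passed test therefore falsifies the hypothesis.
hypotheses = {
    "dead battery": lambda obs: not obs["headlights_work"],
    "ignition issue": lambda obs: not obs["strong_ignition_spark"],
    "empty gas tank": lambda obs: not obs["gas_on_dipstick"],
}

# Keep only the hypotheses that the tests failed to falsify.
surviving = [name for name, fits in hypotheses.items() if fits(observations)]
print("conclusion:", surviving)  # conclusion: ['empty gas tank']
```

Only the empty-tank hypothesis survives the tests, which is the conclusion the example draws; a faulty gas gauge would be a natural follow-up hypothesis to test next.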


