
Philosophy of Science

Abduction and Hypothesis Withdrawal in Science

Lorenzo Magnani
University of Pavia, Pavia, Italy
lorenzo@philos.unipv.it


ABSTRACT: This paper introduces an epistemological model of scientific reasoning which can be described in terms of abduction, deduction and induction. The aim is to emphasize the significance of abduction in order to illustrate the problem-solving process and to propose a unified epistemological model of scientific discovery. The model first describes the different meanings of the word abduction (creative, selective, to the best explanation, visual) in order to clarify their significance for epistemology and artificial intelligence. In different changes in theoretical systems we witness different kinds of discovery processes operating. Discovery methods are "data-driven," "explanation-driven" (abductive), and "coherence-driven" (formed to overcome contradictions). Sometimes there is a mixture of such methods: for example, a hypothesis devised to overcome a contradiction is found by abduction. Contradiction, far from damaging a system, helps to indicate regions in which it can be changed and improved. I will also consider a kind of "weak" hypothesis that is hard to negate, and the ways of making such negation easy. In these cases the subject can "rationally" decide to withdraw his or her hypotheses even in contexts where it is "impossible" to find "explicit" contradictions and anomalies. Here, the use of negation as failure (an interesting technique for negating hypotheses and accessing new ones, suggested by artificial intelligence and cognitive science) is illuminating.


I. Abduction and Scientific Discovery

Philosophers of science in the twentieth century have traditionally distinguished between the logic of discovery and the logic of justification. Most have concluded that no logic of discovery exists and, moreover, that a rational model of discovery is impossible: in short, that scientific discovery is irrational and there is no reasoning to hypotheses. The work of Simon, Langley, Bradshaw, and Zytkow (Langley et al., 1987) showed that methods for discovery could be found that were computationally adequate for rediscovering empirical laws. The general goal is not the full simulation of scientists, but the making of discoveries about the world, using methods that extend human cognitive capacities. The goal is to build prosthetic scientists: just as telescopes are designed to extend the sensory capacity of humans, computational models of scientific discovery and reasoning are designed to extend their cognitive capacity.

At present, computational models of scientific discovery and theory formation play a prominent role in shedding light on the transformations of rational conceptual systems. A new abstraction paradigm, aimed at unifying the different perspectives and providing some design insights for future models, is proposed here.

Abduction is becoming an increasingly popular term in AI (Peng and Reggia, 1987a and 1987b; Pople, 1973; Reggia, Nau and Wang, 1983; Thagard, 1988 and 1992), especially in the field of medical knowledge-based systems (KBSs) (Josephson, Chandrasekaran, Smith and Tanner, 1986; Josephson and Josephson, 1994; Magnani, 1992 and 1997a; Ramoni, Stefanelli, Magnani and Barosi, 1992). The type of inference called abduction was studied by Aristotelian syllogistics, and later on by mediaeval reworkers of syllogism. In the last century abduction was once again studied closely, by Peirce (1931-1958), who interpreted it essentially as a creative process of generating a new hypothesis. Abduction and induction, viewed together as processes of production and generalization of new hypotheses, are sometimes called reduction. As Lukasiewicz (1970, p. 7) makes clear, "Reasoning which starts from reasons and looks for consequences is called deduction; that which starts from consequences and looks for reasons is called reduction."

There are two main epistemological meanings of the word abduction: 1) abduction that only generates plausible hypotheses (selective or creative), and 2) abduction considered as inference to the best explanation, which also evaluates hypotheses. All we can expect of our "selective" abduction is that it tends to produce hypotheses that have some chance of turning out to be the best explanation. Selective abduction will always produce hypotheses that give at least a partial explanation and therefore have a small amount of initial plausibility. In this respect abduction is more efficacious than the blind generation of hypotheses.
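To make the point concrete, here is a minimal Python sketch of selective abduction over a pre-stored "encyclopedia" of diagnostic hypotheses, loosely in the spirit of the set-covering diagnostic model cited below; the causal table, hypothesis names and findings are all invented for illustration.

```python
# Hypothetical causal knowledge: each pre-stored hypothesis -> the findings it can explain.
CAUSES = {
    "anemia":         {"fatigue", "pallor"},
    "hypothyroidism": {"fatigue", "weight_gain"},
    "flu":            {"fever", "fatigue"},
}

def selective_abduction(observations):
    """Return only the hypotheses that explain at least part of the observations.

    Ranking the survivors is left to a later evaluation step (consilience,
    simplicity, ...); here we only filter out hypotheses with no explanatory
    bearing on the data, instead of generating hypotheses blindly.
    """
    candidates = {}
    for hypothesis, explainable in CAUSES.items():
        covered = explainable & observations
        if covered:   # partial explanation => some initial plausibility
            candidates[hypothesis] = covered
    return candidates

if __name__ == "__main__":
    print(selective_abduction({"fatigue", "pallor"}))
    # 'anemia' covers both findings; 'hypothyroidism' and 'flu' cover only 'fatigue'.
```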

Peirce claimed that all thinking is in signs, and that signs can be icons, indices, or symbols (Thagard and Shelley, 1994). Icons are signs that resemble what they represent. There are instances of abductive thinking that can be interpreted as pictorial. We should remember, as Peirce noted, that abduction plays a role even in relatively simple visual phenomena. Visual abduction, a special form of abduction, occurs when hypotheses are instantly derived from a stored series of previous similar experiences. It covers a mental procedure that tapers into a non-inferential one, and falls into the category called "perception" (Shelley, 1996).

Philosophically, perception is viewed by Peirce as a fast and uncontrolled knowledge-production procedure. Perception, in fact, is a vehicle for the instantaneous retrieval of knowledge that was previously structured in our mind through inferential processes. By perception, knowledge constructions are so instantly reorganized that they become habitual and diffuse and do not need any further testing. Many visual stimuli are ambiguous, yet people are adept at imposing order on them: "We readily form such hypotheses as that an obscurely seen face belongs to a friend of ours, because we can thereby explain what has been observed" (Thagard, 1988, p. 53). This kind of image-based hypothesis formation can be considered as a form of visual abduction.

To illustrate from the field of medical knowledge, the discovery of a new disease and the manifestations it causes can be considered as the result of a creative abductive inference. Creative abduction thus deals with the whole field of the growth of scientific knowledge; it is irrelevant in medical diagnosis, where the task is instead to select from an encyclopedia of pre-stored diagnostic entities (Ramoni, Stefanelli, Magnani and Barosi, 1992). In the case of scientific theory change, selective abduction is replaced by creative abduction and there is a set of competing theories instead of diagnostic hypotheses.

Furthermore, the language of background scientific knowledge is to be regarded as open: in the case of competing theories, as studied by the epistemology of theory change, we cannot, contrary to Popper's point of view (1970), reject a theory merely because it fails occasionally. If it is simpler and explains more significant data than its competitors, a theory can be accepted as the best explanation.

To achieve the best explanation, it is necessary to have a set of criteria for evaluating the competing explanatory hypotheses reached by creative or selective abduction. Evaluation has a multi-dimensional character. Consilience (Thagard, 1988) can measure how much a hypothesis explains, so it can be used to determine whether one hypothesis explains more of the evidence (for instance, empirical or patient data) than another: thus, it deals with a form of corroboration. In this way a hypothesis is considered more consilient than another if it explains more "important" (as opposed to "trivial") data than the others do. In inferring the best explanation, the aim is not the sheer amount of data explained, but its relative significance. The assessment of relative importance presupposes that an inquirer has a rich background knowledge about the kinds of criteria that concern the data. Simplicity too can be highly relevant when discriminating between competing explanatory hypotheses; it deals with the problem of the level of conceptual complexity of hypotheses when their consiliences are equal. Explanatory criteria are needed because the rejection of a hypothesis requires demonstrating that a competing hypothesis provides a better explanation. The theory of explanatory coherence and the related computational system seem to me the best and most sophisticated ways we now possess to solve this problem. (1)
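The following toy sketch (in Python, and emphatically not Thagard's ECHO system) illustrates the two criteria just discussed: consilience weighted by the importance of the explained evidence, with simplicity used to break ties when consiliences are equal. The hypotheses, evidence labels, importance weights and assumption counts are all invented.

```python
# Importance of the available evidence: "important" versus "trivial" data.
IMPORTANCE = {"e1": 3.0, "e2": 3.0, "e3": 0.5}

# Each competing hypothesis: (evidence it explains, number of auxiliary
# assumptions it needs -- used here as a crude inverse measure of simplicity).
HYPOTHESES = {
    "H1": ({"e1", "e2"}, 1),
    "H2": ({"e1", "e2"}, 3),
    "H3": ({"e3"}, 1),
}

def consilience(explained):
    """Weighted amount of evidence a hypothesis accounts for."""
    return sum(IMPORTANCE[e] for e in explained)

def best_explanation(hypotheses):
    # Prefer higher consilience; among equally consilient hypotheses,
    # prefer the simpler one (fewer auxiliary assumptions).
    return max(hypotheses,
               key=lambda h: (consilience(hypotheses[h][0]), -hypotheses[h][1]))

print(best_explanation(HYPOTHESES))
# H1: it ties with H2 on consilience and wins on simplicity; H3 explains only trivial data.
```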

Clearly, in some cases conclusions are reached according to rational criteria such as consilience or simplicity. Nevertheless, in reasoning to the best explanation, motivational, ethical or pragmatic criteria cannot be discounted. Indeed the context suggests that they are unavoidable: this is especially true in medical reasoning (for instance, in therapy planning), but scientists who must discriminate between competing hypotheses or competing theories are sometimes also subject to motivational biases in their inferences to the best explanation.

Nevertheless, if we consider the epistemological model as an illustration of medical diagnostic reasoning, modus tollens is very efficacious because of the fixedness of the language that expresses the background medical knowledge: a hypothesis that fails can nearly always be rejected immediately.

There are various kinds of conceptual transformations involved in creative scientific reasoning, such as anomaly resolution, conceptual combination, analogical and visual thinking, thought experiment, and so on. I have dealt with their different abductive roles elsewhere (Magnani, 1997a).

II. Withdrawing Scientific Hypotheses

In different theoretical changes we witness different kinds of discovery processes operating. Discovery methods are data-driven (generalizations from observation and from experiments), explanation-driven (abductive), and coherence-driven (formed to overcome contradictions) (Thagard, 1992). Sometimes there is a mixture of such methods: for example, a hypothesis devised to overcome a contradiction is found by abduction. Therefore, contradiction and its reconciliation play an important role in philosophy, in scientific theories and in all kinds of problem-solving. Contradiction is the driving force underlying change (thesis, antithesis and synthesis) in the Hegelian dialectic, and the main tool for advancing knowledge, through conjectures and refutations (Popper, 1963) and proofs and counter-examples (Lakatos, 1976), in the Popperian philosophy of science and mathematics.

Following Quine's line of argument against the distinction between necessary and contingent truths (Quine, 1961), when a contradiction (an anomaly) arises, consistency can be restored by rejecting or modifying any assumption which contributes to the derivation of the contradiction: no hypothesis is immune from possible alteration. Of course there are epistemological and pragmatic limitations: some hypotheses contribute to the derivation of useful consequences more often than others, and some participate more often in the derivation of contradictions than others. For example, it might be useful to abandon, among the hypotheses which lead to contradiction, the one which contributes least to the derivation of useful consequences; if contradictions continue to arise and the assessed utility of the hypotheses changes, it may be necessary to backtrack, reinstate a previously abandoned hypothesis and abandon an alternative instead.
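A minimal sketch of this retraction heuristic, assuming we can record which active assumptions support the derived contradiction and can attach a revisable utility score to each of them; all assumption names and scores below are invented:

```python
def restore_consistency(active, contradiction_support, utility):
    """Withdraw the assumption that supports the contradiction while
    contributing least to useful consequences; return (revised set, victim)."""
    culprits = active & contradiction_support
    victim = min(culprits, key=lambda a: utility[a])
    return active - {victim}, victim

active = {"A1", "A2", "A3"}
utility = {"A1": 9.0, "A2": 2.5, "A3": 7.0}   # assessed usefulness so far

# Suppose A1 and A2 jointly derive a contradiction:
active, withdrawn = restore_consistency(active, {"A1", "A2"}, utility)
print(withdrawn, active)   # A2 is abandoned; A1 and A3 survive

# If contradictions persist and A2's assessed utility later rises, it can be
# reinstated and an alternative culprit abandoned instead (backtracking).
```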

Hence, the derivation of inconsistency contributes to the search for alternative, and possibly new hypotheses: for each assumption which contributes to the derivation of a contradiction there exists at least one alternative new system obtained by abandoning or modifying the assumption.

The classical example of a theoretical system challenged by a contradiction is the case in which the report of an empirical observation or experiment contradicts a scientific theory. Whether it is more beneficial to reject the report or the statement of the theory depends on the whole effect on the theoretical system. It is also possible that many alternatives might lead to non-comparable, equally viable, but mutually incompatible, systems. Empirical anomalies result from data that cannot currently be fully explained by a theory. They often derive from predictions that fail, which implies some element of incorrectness in the theory. In general terms, many theoretical constituents may be involved in accounting for a given domain item (anomaly) and hence they are potential points for modification. The detection of these points involves defining which theoretical constituents are employed in the explanation of the anomaly. Thus, the problem is to investigate all the relationships in the explanatory area.

First and foremost, anomaly resolution involves the localization of the problem at hand within one or more constituents of the theory; it is then necessary to produce one or more new hypotheses to account for the anomaly; and, finally, these hypotheses need to be evaluated so as to establish which one best satisfies the criteria for theory justification. Hence, anomalies require a change in the theory, yet once the change is successfully made, anomalies are no longer anomalous but are in fact resolved. General strategies for anomaly resolution, as well as for producing new ideas and for assessing theories, have been studied by Darden (1991).

Empirical anomalies are not alone in generating impasses. The so-called conceptual problems represent a particular form of anomaly, and resolving them may involve satisfactorily answering questions about the nature of theoretical entities. Such conceptual problems do not arise directly from data, but from the nature of the claims in the principles or in the hypotheses of the theory. It is far from simple to identify a conceptual problem that requires a resolution, since a conceptual problem may concern the adequacy or the ambiguity of a theory, but also its incompleteness or lack of evidential support. In Magnani (1997a) I present some examples, derived from the historical discovery of non-Euclidean geometries, which illustrate the relationships between strategies for anomaly resolution and explanatory and productive visual thinking: the objective is to consider how visual thinking may be relevant to hypothesis formation and scientific discovery, and to explore the first epistemological and cognitive features of what I call visual abduction. (2)

As Lakatos argues, in a mature theory with a history of useful consequences, it is generally better to reject an anomalous conflicting report than it is to abandon the theory as a whole. The cases in which we have to abandon a whole theory are very rare: a theory may be considered as a complex information system in which there is a collection of cooperating individual statements, some of which are useful and more firmly held than others; propositions that belong to the central core of a theory are more firmly held than those which are located closer to the border, where instead rival hypotheses may coexist as mutually incompatible alternatives.

Accumulating reports of empirical observations can help in deciding in favor of one alternative over another. We have to remember that even without restoring consistency, an inconsistent system can still produce useful information. Of course, from the point of view of classical logic, any conclusion can be derived from inconsistent premises, but in practice efficient proof procedures infer only "relevant" conclusions with varying degrees of accessibility, as stated by the criteria of non-classical relevant entailment (Anderson and Belnap, 1975).

We may conclude by asserting that contradiction, far from damaging a system, helps to indicate regions in which it can be changed (and improved).

Contradiction has a preference for strong hypotheses, which are more easily falsified than weak ones; moreover, strong hypotheses may be more easily weakened than weak ones, which prove difficult subsequently to strengthen. It is always better to produce mistakes and then correct them than to make no progress at all. Let us now consider a kind of "weak" hypothesis that is hard to negate, and the ways of making such negation easy. In these cases the subject can rationally decide to withdraw his or her hypotheses even in contexts where it is impossible to find "explicit" contradictions; more than that, thanks to the new information reached simply by finding this kind of negation, the subject is free to form new hypotheses.

There is a kind of negation, studied by researchers into logic programming, which I consider to be very important also from the epistemological point of view: negation as failure. (3) It is active as a "rational" process of withdrawing previously-imagined hypotheses in everyday life, but also in certain subtle kinds of diagnostic and epistemological settings. Contrasted with classical negation, with the double negation of intuitionistic logic, and with the philosophical concept of Aufhebung, negation as failure shows how a subject can decide to withdraw his hypotheses, while maintaining the rationality of his reasoning, in contexts where it is impossible to find contradictions and anomalies.
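For readers unfamiliar with the technique, here is a minimal propositional sketch of negation as failure in Python (a toy interpreter, not a logic-programming system): "not q" succeeds exactly when every attempt to prove q from the current program finitely fails, whereas under classical negation the mere absence of a proof of q would license no conclusion at all. The rule base is invented, and chosen to echo the theme of this section: a hypothesis is withdrawn when no support for it can be proved, even though no explicit contradiction is ever found.

```python
RULES = {
    # head: list of alternative bodies; literals prefixed with "not " use negation as failure
    "withdraw(h)": [["anomalous(h)"], ["not supported(h)"]],
    "supported(h)": [["evidence_for(h)"]],
}
FACTS = set()    # no evidence_for(h) and no anomalous(h) have been recorded

def prove(goal, depth=0):
    if depth > 20:                                # crude loop guard for this toy example
        return False
    if goal.startswith("not "):
        return not prove(goal[4:], depth + 1)     # negation as failure
    if goal in FACTS:
        return True
    return any(all(prove(subgoal, depth + 1) for subgoal in body)
               for body in RULES.get(goal, []))

print(prove("withdraw(h)"))
# True: 'supported(h)' cannot be proved, so the hypothesis is withdrawn
# without any explicit contradiction having been derived.
```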

In Magnani (1991 and 1997b) I have explored whether negation as failure can be employed to model hypothesis withdrawal in Poincaré's conventionalism of the principles of physics, showing how conventions can be motivationally abandoned.


Notes

(1) Thagard proposes a very interesting computational account of scientific controversies in terms of so-called explanatory coherence (Thagard, 1992), which improves on Lakatos' classic account (1971). Levi's theory of suppositional reasoning (Levi, 1996) is also related to the problem of so-called "belief change."

(2) In Magnani et al. (1994) I describe the first features of a computational system (VASt) able to model image-based hypothesis generation (visual or iconic abduction) in common-sense reasoning, which I am extending to the field of scientific discovery.

(3) The links between negation as failure, completed data bases (Clark, 1978), and the closed world assumption (Shepherdson, 1984, 1988) have been studied in great detail. A survey can be found in Lloyd (1987).

References

A. Anderson and N. Belnap, 1975, Entailment, Princeton University Press, Princeton.

K. L. Clark, 1978, Negation as failure, in H. Gallaire and J. Minker (eds.), Logic and Data Bases, Plenum, New York, pp. 119-140. (Reprinted in M. L. Ginsberg (1987), pp. 311-325.)

L. Darden, 1991, Theory Change in Science: Strategies from Mendelian Genetics. Oxford University Press, Oxford.

J. R. Josephson and S. G. Josephson, 1994, Abductive Inference: Computation, Philosophy, Technology, Cambridge University Press, Cambridge.

________, B. Chandrasekaran, J. W. Smith Jr. and M. C. Tanner, 1986, Abduction by classification and assembly, in PSA 1986, vol. 1, Philosophy of Science Association, pp. 458-470.

I. Lakatos, 1971, History of science and its rational reconstructions, in R. Buck and R. S. Cohen (eds.), PSA 1970: In memory of Rudolf Carnap, Reidel, Dordrecht.

_________, 1976, Proofs and Refutations. The Logic of Mathematical Discovery, Cambridge University Press, Cambridge.

P. Langley, H. A. Simon, G. L. Bradshaw and J. M. Zytkow, 1987, Scientific Discovery: Computational Explorations of the Creative Processes, The MIT Press, Cambridge, MA.

I. Levi, 1996, For the Sake of the Argument. Ramsey Test Conditionals, Inductive Inference, and Nonmonotonic Reasoning, Cambridge University Press, Cambridge.

J. W. Lloyd, 1987, Foundations of Logic Programming, 2nd edition, Springer, Berlin.

J. Lukasiewicz, 1970, Creative elements in science [1912], in J. Lukasiewicz, Selected Works, North-Holland, Amsterdam, pp. 12-44.

L. Magnani, 1991, Epistemologia applicata, Marcos y Marcos, Milan.

________, 1992, Abductive reasoning: philosophical and educational perspectives in medicine. In D. A. Evans and V. L. Patel (Eds.), Advanced Models of Cognition in Medical Training and Practice, Berlin, Springer, pp. 21-41.

________, 1997a, Ingegnerie della conoscenza. Introduzione alla filosofia computazionale, Marcos y Marcos, Milan.

________, 1997b, Withdrawing hypotheses by negation as failure, in Essays in honor of Imre Toth, forthcoming.

________, S. Civita and G. Previde Massara, 1994, Visual cognition and cognitive modeling. In V. Cantoni (Ed.), Human and Machine Vision: Analogies and Divergences, New York, Plenum Press, pp. 229-243.

C. S. Peirce, 1931-1958, Collected Papers, 8 vols., C. Hartshorne, P. Weiss and A. Burks (Eds.), Cambridge, MA, Harvard University Press.

Y. Peng and J. A. Reggia, 1987a, A probabilistic causal model for diagnostic problem solving I: integrating symbolic causal inference with numeric probabilistic inference, IEEE Transactions on Systems, Man, and Cybernetics, 17, pp. 146-162.

________, 1987b, A probabilistic causal model for diagnostic problem solving II: diagnostic strategy. IEEE Transactions on Systems, Man, and Cybernetics, 17, pp. 395-406.

K. Popper, 1963, Conjectures and Refutations. The Growth of Scientific Knowledge, Routledge and Kegan Paul, London.

K. Popper, 1970, The Logic of Scientific Discovery, Hutchinson, London.

H. E. Pople, 1973, On the mechanization of abductive logic, in Proceedings of the Third International Joint Conference on Artificial Intelligence, pp. 147-152.

W. V. O. Quine, 1951, Two dogmas of empiricism, The Philosophical Review, 60, pp. 20-43. Also in W. V. O. Quine, From a Logical Point of View, Hutchinson, London, 1953, 2nd ed. 1961, pp. 20-46.

M. Ramoni, M. Stefanelli, L. Magnani and G. Barosi, 1992, An epistemological framework for medical knowledge-based systems, IEEE Transactions on Systems, Man, and Cybernetics, 22(6), pp. 1361-1375.

J. A. Reggia, D. S. Nau and P. Y. Wang, 1983, Diagnostic expert systems based on a set covering model, International Journal of Man-Machine Studies, 19, pp. 443-460.

C. Shelley, 1996, Visual abductive reasoning in archaeology, Philosophy of Science, 63(2), pp. 278-301.

J. C. Shepherdson, 1984, Negation as failure: a comparison of Clark's completed data base and Reiter's closed world assumption, Journal of Logic Programming, 1(1), pp. 51-79.

________, 1988, Negation in logic programming, in J. Minker (ed.), Foundations of Deductive Databases, Morgan Kaufmann, Los Altos, CA, pp. 19-88.

P. Thagard, 1988, Computational Philosophy of Science, Cambridge, MA, The MIT Press.

________, 1992, Conceptual Revolutions, Princeton, NJ, Princeton University Press.

________ and C. Shelley, 1994, Limitations of current formal models of abductive reasoning, Department of Philosophy, University of Waterloo, Ontario, Canada, forthcoming.
