20th World Congress of Philosophy

Philosophy of Mind

Animal Belief

Roger Fellows
University of Bradford


ABSTRACT: If Mary believes a bone is on the lawn, then she literally believes that, though her belief may be mistaken. But, if her pet Fido rushes up to what is in fact a bit of bone-shaped plastic, then Fido does not believe that there is a bone on the lawn. However, the best explanation for Fido’s behavior may be that he initially believed there was a bone on the lawn. Unless we are methodological or analytical behaviorists, the claim that we can best explain the behavior of dumb animals by treating them as if they literally held beliefs (and desires) subject to various rationality constraints is hardly surprising. I argue that this instrumentalism does not support the realist view that dumb animals are literally to be credited with beliefs. In particular, I focus on Davidson’s argument that a creature can have beliefs only if it can be the interpreter of the speech of another. Davidson’s argument, which has not won wide acceptance, is the most subtle examination to date of the relation between belief and language. I examine the premises of his argument, indicate two major criticisms, and attempt to defend his conclusion that dumb animals lack beliefs by adducing supporting arguments.


This paper is concerned with the problem of whether non-language-using creatures literally have beliefs, rather than with the question whether it is predictively useful to ascribe beliefs to them. The answer to the latter question is plainly affirmative. The issue of belief-attribution to dumb animals is a narrow form of a more general problem: whether dumb animals can literally be credited with thoughts. Still, it is reasonable to focus on the case of belief, since it lies, as it were, at the centre of the cognitive domain. The attribution of any intentional state, such as desire, regret, hope and so on, to a creature presupposes the attribution of belief to that creature.


Like many other philosophers, I will kick off with a brief discussion of Descartes’ views, which many find wildly implausible. Descartes believed that dumb animals could not be credited with beliefs because he thought they were mindless machines: dumb animals behave as if they feel fear, as if they believe various things, etc., but the truth is that all of the cases where we are inclined to ascribe psychological states to them can be redescribed solely in terms of internal physiological processes set in motion by mechanical causation. Why not, then, follow the Churchlands and extend this eliminativism to belief-attribution to human beings? Descartes’ reason would be that we humans are the possessors of immaterial minds whose essential property is thought: no mind, no thought.

However, the rejection of substantial dualism as a philosophy of mind would still leave Descartes with a claim which a materialist could accept. In a letter to Henry More, Descartes urges that

. . . the word is the sole sign and the only certain mark of the presence of thought hidden and wrapped up in the body; now all men, the most stupid and the most foolish, those even who are deprived of the organs of speech make use of signs, whereas the brutes never do anything of the kind; which may be taken for the true distinction between man and brute.(1)

But Descartes’ reasoning here is unconvincing. Verbs of propositional attitude such as the verb ‘to believe’ are logically intensional in the sense that they generate referentially-opaque occurrences of singular terms and extensionally-opaque occurrences of predicates, in sentences which can occupy the gap in (e.g.) the sentence frame: NP + VPA + that + ( ). I (wholly unoriginally) believe that mental states such as belief are described by sentences which are logically intensional. (2) Consider the following case. I enter my dark house at night. My dog Fido rushes towards me growling. I turn on the light and Fido’s growls turn to whimpers of (joy?). The most natural way of describing this state of affairs is to say that Fido behaved aggressively towards me initially because he did not realise that I was identical with his master and friend. I say that this is the most natural way of describing what is going on here, and it is open for a defender of Descartes to respond by pointing out that, although our descriptions of Fido’s behaviour are intensional, we are not bound to employ them. But this response leans on the ontology of substantial dualism without illuminating the connections between mind and language.


The strongest argument in support of the claim that dumb animals should literally be credited with beliefs, then, is that we rationalise their behaviour by ascribing to them beliefs and desires bounded by various rationality constraints. Intentionalist explanations fare better as predictors of animal behaviour than alternative explanatory models such as operant conditioning. This is an empirical claim, which can of course be denied. Some psychologists, for instance Skinner, have even denied that the explanation of human action is ineliminably intentionalist. (3) Few analytic philosophers, however, take this very seriously. (4) However, that we do attribute beliefs and desires to non-language-using creatures does not come close to clinching the case for animal belief. Dennett has argued that intentional explanation is a better predictor of the moves of a chess-playing computer than explanation from either the design or the physical stance. (5) We may, then, ascribe beliefs and desires to dumb animals with equanimity because, by symmetry with the chess-playing computer, our ascriptions are purely instrumental. A possible response here is to ask why, in that case, we should be realist in our ascriptions of beliefs and desires to ourselves. This returns us to the connection between language and belief. Following Donald Davidson, I will argue that languageless creatures cannot have beliefs (which is not to deny that they are the possessors of information-processing sub-doxastic states).


Consider the following principle: A creature can properly have the belief that P attributed to it only if it possesses those concepts mastery of which is required in order to have the belief that P. This principle could be tightened in various ways, but it does not obviously beg a central question, since it does not define possession of a concept in terms of grasping the actual and possible extension of a certain predicate in a language. The principle is plausible in its application to human beings: I cannot believe that a certain object is a bone unless I have the concept of ‘bone’ under which the object in question is subsumed. However, given that I possess the concept, I might well misapply it. Suppose someone leaves a bone-shaped bit of plastic on my lawn. On noticing it, I come to believe that the object is a bone. Fido (remember him?) rushes up to the bit of plastic, takes a bite, drops it and retires. I inspect it, notice that it is plastic, and my belief that the object is a bone is cancelled.

Did Fido and I share the original belief that there was a bone on the lawn? No, because my false belief made sense against a background of true beliefs. (6) This network of associated beliefs locates the point in semantic space at which the belief that the object is a bone lies. Fido can have no beliefs involving the concept ‘plastic’, nor the concept ‘calcium’, and so on. The suggestion, then, is that we need a language which fixes a network of concepts (the nodes in the network), and provides for connections (of a deductive and inductive kind) between the nodes. How could a dumb animal have the belief that there is a bone on the lawn, given that it lacks a language? And when Fido bit the phoney bone, did he come to believe that it was not made of calcium?

This line of thought has been criticised on two fronts. The first criticism is that humans employ concepts intelligently without being able to relate them to other concepts in the network. (7) However, we need not suppose that a language-speaker knows all the properties of bones; and it would be idle to speculate on just how many properties of a bone a speaker would need to know in order to have the concept of a bone. But clearly, for communication about bones between two or more speakers of a language to be possible, they must have some common knowledge about bones. The second criticism is that dumb animals possess only simple and not complex concepts. (8) But rather than pursue this objection here, I will turn to Davidson’s argument against the possibility of animal belief. In my opinion, Davidson’s work provides the most sophisticated account to date of the relation between belief and language.


First, I shall outline briefly Davidson’s argument. (9) Second, I shall mention two objections to it; and, finally, I will try to defend it. I am not certain whether my defence provides independent argumentation for the two premises in Davidson’s argument, or merely complements or rearranges his own thinking; but, whichever is the case, I believe that there is here a powerful line of thought against the possibility of dumb animals having beliefs.

Davidson’s argument that dumb animals lack thought rests upon two premises. The first is that, if a creature x has the concept of belief, then it is a language user:

(a) CB(x) → LU(x)

The second is that, if x has beliefs, then it possesses the concept of belief:

(b) B(x) → CB(x).

Clearly, a valid consequence of (a) and (b) is:

(c) ¬LU(x) → ¬B(x).
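Granting the premises, the inference to (c) is hypothetical syllogism followed by contraposition. A formal rendering of the schema (mine, purely illustrative) in Lean:

```lean
-- Davidson's argument as a propositional schema: from
-- (a) CB → LU and (b) B → CB, the conclusion (c) ¬LU → ¬B
-- follows by chaining the premises and contraposing.
example (CB LU B : Prop) (a : CB → LU) (b : B → CB) : ¬LU → ¬B :=
  fun hnLU hB => hnLU (a (b hB))
```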

Davidson defends (a) by reference to his account of radical interpretation. (10) In order to translate the language of a newly-discovered people, we consider the set of ordered pairs whose first member is an uninterpreted utterance of the language, and whose second member is the set of circumstances in which the utterance is usually made. This provides us with an evidential base for a theory of meaning for the language under investigation. But Davidson argues that the pairing of utterances with publicly observable goings-on requires that we ascribe beliefs to members of the speech community. There is an apparent difficulty here, because it would seem that, in order to get at the meanings of the utterances of members of the community, we must attribute to them certain beliefs, and, in order to know what they believe, we must have succeeded in interpreting their utterances.

Davidson argues incisively that the only way out of this circle is to hold belief constant, and solve for meaning. Now this is the crudest summary of Davidson’s views, but it does bring out the crucial point that, on this account, a translator of language L must have beliefs about the beliefs of the speakers of L. And to have beliefs about beliefs is to possess the concept of belief.

The trouble is that this argument appears to support not (a), but (a*):

(a*) LU(x) → CB(x)

And, although this converse entailment is plausible, (c) does not follow from (a*) and (b). (11)
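The gap can be made vivid with a truth-table countermodel: interpret LU as false and CB and B as true. Then (a*) and (b) both hold while (c) fails. A Lean rendering (mine, purely illustrative):

```lean
-- Countermodel: with LU := False and CB, B := True, both
-- (a*) LU → CB and (b) B → CB are satisfied, yet
-- (c) ¬LU → ¬B is falsified, so (c) is not entailed.
example : (False → True) ∧ (True → True) ∧ ¬(¬False → ¬True) :=
  ⟨False.elim, id, fun h => h id trivial⟩
```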

Davidson’s support for (b) is so short that it can be quoted in full:

Can a creature have a belief if it does not have the concept of belief? It seems to me that it cannot, and for this reason. Someone cannot have a belief unless he understands the possibility of being mistaken, and this requires grasping the contrast between truth and error — true belief and false belief. But this contrast, I have argued, can emerge only in the context of interpretation, which alone forces us to the idea of an objective public truth. (12)

On the face of it, this line of thought seems to me to be resistible. Let us grant that dumb animals do not grasp the contrast between truth and error. Why does a dumb animal need to recognise that it has beliefs which may turn out to be false (i.e. to possess the concept of belief), in order to have beliefs? Otherwise expressed, why must a creature which may be credited with beliefs have the capacity to monitor reflexively its system of beliefs, when changes in the external world causally cancel, modify or create new "beliefs" without reflection?


In the last section of the paper I will sketch arguments in support of (a) and (b) above.

(a) Suppose CB(x). Then:

(i) x knows that beliefs aim at truth.

(ii) If x knows that beliefs aim at truth then x knows the difference between true belief and false belief, and hence the difference between truth and falsity.

(iii) x must be able to tell the difference between changes in belief and changes in the external world, for, without this capacity, x could not form the conception of an objective world; and, without this conception, there could be no difference between truth and falsity for x.

(iv) A language L enables x to distinguish changes in belief from changes in the world. The that-clauses in L, which fix the contents of x’s beliefs, have (relatively) stable meanings, which enable x to determine the congruence or lack of congruence between x’s own beliefs and the beliefs of others, on the one hand, and the world itself, on the other.

In summary, a creature which possesses the concept of belief can distinguish between true and false belief. This requirement in turn rests upon a capacity to distinguish changes in the world from changes in mind. Language-learning, which is community-based, provides for the fixation of belief. (What is crucial here is learning and training, features which are conspicuously lacking in the signalling systems of dumb animals.)

Turning to (b), let us suppose B(x): then x must possess those concepts necessary to sustain x’s beliefs (see above). Concepts are one and all counterfactual. I wish to deny that there are any simple ostensive concepts. (13) The argument is as follows. Suppose that everything which was red was round, and vice versa. Then, although ‘red’ and ‘round’ would be co-extensive, they would still be different concepts, since there are possible worlds in which their extensions come apart. This is what Peirce meant when he said that our concepts (intellectual conceptions) relate to what might be, rather than to what merely is. (14) So if a creature is to have the belief that there is a bone in front of it, it must possess the concept of a bone. And if it possesses this concept, it knows not only the actual extension of ‘bone’, but also which possible objects fall within its extension or are excluded from it. If a creature were unable to distinguish the actual from the possible extension of a concept, or to think about what properties an object would need in order to fall under a concept, then, whatever might be said about the creature, it would not be a creature which possessed our concepts. So it could not share our beliefs.

What is it to reason counterfactually? Many of our beliefs are explicitly counterfactual in nature; and our arts and sciences would be unimaginable without counterfactual thought. But it will not do to say that dumb animals possess merely ostensive beliefs, whereas we language-users are capable of counterfactual beliefs, since we have just noticed that the attribution of any concept to a creature x implicates that creature in a capacity for counterfactual thought.

An answer due to Ramsey is this: to determine whether a counterfactual belief of the form ‘if A were the case, then B would be the case’ is true, add A to your existing belief set C. If C thereby becomes inconsistent, minimally revise C in order to accommodate A consistently. Finally, check whether B is a (semantic) consequence of C. (15)
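Ramsey’s recipe is algorithmic enough to sketch. The following toy model is mine, not Ramsey’s, and rests on strong simplifying assumptions: beliefs are truth-value assignments to named atomic propositions, ‘consequence’ is closure under simple if-then rules between literals, and minimal revision just overwrites the antecedent’s atom.

```python
# Toy model of the Ramsey test: to evaluate "if A were the case,
# B would be the case", add A to the belief set, minimally revise
# for consistency, and check whether B then follows.
# Simplifying assumptions (for illustration only): beliefs are
# truth-value assignments to atomic propositions; the only inference
# is closure under if-then rules between literals.

def close(assignment, rules):
    """Close an assignment under rules of the form (P, Q), read as
    'if literal P holds, literal Q holds', iterating to a fixpoint."""
    result = dict(assignment)
    changed = True
    while changed:
        changed = False
        for (p, pv), (q, qv) in rules:
            if result.get(p) == pv and result.get(q) != qv:
                result[q] = qv
                changed = True
    return result

def ramsey_test(beliefs, rules, antecedent, consequent):
    """Return True if the counterfactual 'antecedent > consequent'
    passes the Ramsey test against the given belief set."""
    revised = dict(beliefs)          # leave the original beliefs intact
    a, av = antecedent
    revised[a] = av                  # add A, overwriting any conflict
    revised = close(revised, rules)  # draw the consequences
    c, cv = consequent
    return revised.get(c) == cv      # does B now hold?

# Bones contain calcium; I believe the object on the lawn is plastic.
rules = [(("is_bone", True), ("contains_calcium", True))]
beliefs = {"is_bone": False, "contains_calcium": False}
# 'If the object were a bone, it would contain calcium' -- passes.
print(ramsey_test(beliefs, rules, ("is_bone", True), ("contains_calcium", True)))
```

A real account would, of course, use full propositional consistency and a principled minimality ordering on revisions; the point here is only the three-step shape of the test.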

Ramsey’s answer, which seems to me to be along the right lines, requires that a creature which possesses beliefs possesses the concept of truth (in terms of which consistency and inconsistency are defined); and if a creature possesses the concept of truth, then it possesses the concept of belief. I side with Donald Davidson in concluding that dumb animals lack beliefs.



(1) Letter to More (Morus), Feb. 1649: AT V, 278; Descartes Selections (ed. R.M. Eaton), p. 360.

(2) See, e.g. R. Chisholm, ‘Sentences about believing’, in Minnesota Studies in the Philosophy of Science, Vol.II, 1958, 510 - 520.

(3) See, e.g. B.F. Skinner, Beyond Freedom and Dignity, Ch. 1, Jonathan Cape, London, 1972

(4) See, e.g. G. Harman, Thought, Ch. 3. Princeton University Press, Princeton, New Jersey 1973.

(5) See D.C. Dennett, ‘Intentional Systems’, The Journal of Philosophy, LXVIII, 4 (1971): 87 - 106.

(6) See, D. Davidson, ‘On the Very Idea of a Conceptual Scheme’, presidential address to the Eastern Meeting of the American Philosophical Association, Atlanta, 28 December, 1973.

(7) See G. Graham, Philosophy of Mind, Oxford, 1993, Ch. 4. Graham characterises what he takes to be Davidson’s argument against animal belief thus:

(1) A creature can have a belief only if the belief is positioned in a network of beliefs.

(2) Animals lack belief networks.

(3) Therefore, animals lack beliefs.

As I hope will be apparent, this is an oversimplification of Davidson’s own argument. But Graham rejects the second premise of the ‘Network Argument’. He argues that a dog may believe that a cat has run up a tree although he has no beliefs about soil, water, or whether the tree in question has leaves or needles, etc. The dog, Graham asserts, conceives of the tree with its own stock of concepts, which may be dissimilar from our own. Graham grants that human concepts and beliefs may be embedded in complex networks, but finds it unreasonable to require the same of the beliefs of animals. Graham begs the question at issue. He adduces only two reasons for denying premise (2). The first is that the ascription of beliefs to animals best explains their behaviour, which hardly anyone denies. The second is that two individual people may share the same belief (Graham’s example is of a lay person and a musicologist who both believe that Horowitz was a better classical pianist than Rubinstein). The musicologist will have a stock of concepts which the lay person does not. They share the same belief, but the relational concept ‘x is a better classical pianist than y’ is not embedded in identical networks. But this example only illustrates the unremarkable point that expert knowledge enlarges a person’s stock of concepts. In order for them both to recognise that they share this belief, they must, as Davidson has urged, have a multitude of beliefs in common.

(8) See B. Williams, ‘Deciding to Believe’ in Language, Belief, and Metaphysics, Edited by H.E. Kiefer and M.K. Munitz, State University of New York Press, Albany 1970. Williams makes the point that, in order to have any confidence in the claim that dumb animals have simple concepts, we would need a complexity criterion for concepts. He does not attempt the task, and I think that the construction of such a metric would raise much the same difficulties as are encountered in trying to define the simplicity of scientific theories.

(9) See D. Davidson, ‘Thought and Talk’ in Mind and Language, Edited by S. Guttenplan, Oxford University Press 1975.

(10) See D. Davidson, ‘Radical Interpretation’ in Dialectica, 27 (1973), 313 - 28.

(11) See J. Bishop, ‘More Thought on Thought and Talk’, Mind, LXXXIX (1980), 1 - 16.

(12) D. Davidson, ‘Thought and Talk’.

(13) See B. Aune, Rationalism, Empiricism, and Pragmatism: An Introduction, Ch. 5, Random House, New York 1970

(14) C.S. Peirce, Collected Papers, Edited by C. Hartshorne, and P. Weiss, Vol. 5, Cambridge, Mass: Harvard University Press 1934, secs. 5.469, 5.470, 5.492.

(15) See F. Ramsey, The Foundations of Mathematics, Routledge and Kegan Paul, London, (1931) p. 247. The Ramsey account has progressed from syntactic to semantic characterisations in the hands of philosophers such as R. Stalnaker and D. Lewis. But in this paper I think that there is no need to worry about the detailed explications of counterfactual thought; it is sufficient to recognise that our best accounts of counterfactual belief, on a formal or informal level, follow the spirit of the Peirce/Ramsey approach.


