
Proceedings of the Twentieth World Congress of Philosophy

Volume 6: Analytical Philosophy and Logic

Introduction

Akihiro Kanamori

Analytic philosophy, a dominant tradition of twentieth-century philosophy, can be informatively cast as the outgrowth of the investigations of logic and language of Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein, and in the next generation, of Rudolf Carnap and W.V. Quine. As such, it is a specific historical development, one that featured subtle dialectical interactions among its propounders, interactions that have been reflected or reenacted in later developments. Whatever its heritage, contemporary analytic philosophy continues to use investigations of language and thought to get at fundamental issues at the heart of philosophy: truth, meaning, and knowledge.

Frege, the greatest philosopher of logic since Aristotle, developed quantificational logic in his Begriffsschrift (1879), thereby revolutionizing the study of deductive inference. Frege stimulated analytic philosophy generally by developing a formal framework and infusing it with an incisive non-mental and non-psychological account of language (of which more below). Russell, through his great influence, was the veritable architect of analytic philosophy. His rejection of Hegelian idealism was a pivotal turning point in the history of philosophy, and his subsequent development of logical theory a major intellectual accomplishment. It was Russell who made the analysis of the general structures of language and thought central to philosophy.

The larger efforts of both Frege and Russell were directed at the founding of mathematics on logic, and this Frege-Russell logicism would both inform analytic philosophy as well as stimulate the development of a new field of mathematics. First-order logic is the restriction of the Frege-Russell logic that only allows quantification over individuals (not properties, etc.). After first-order logic emerged in the investigations of David Hilbert, the preeminent mathematician at the turn of this century, seminal work of Kurt Gödel and also of Alfred Tarski led to the development of the new and now sophisticated field of mathematical logic. As a branch of mathematics, modern mathematical logic leaves questions of truth and meaning alone at the basic level and proceeds to develop deep mathematical results about recursively generated formal frameworks. It is the analytic tradition in philosophy that is involved in getting at truth and meaning at the basic level through the analysis of language.

Wittgenstein rejected the claims made by Russell and Frege on behalf of logicism and more generally on behalf of the substance and priority of their logic. His Tractatus Logico-Philosophicus (1922) offered the view that the formal propositions of the new logic are empty of content, not genuine truths but tautologies. For Wittgenstein, the impossibility of a coherent standpoint outside of logic precludes the possibility that logical forms can be stated-they can only be "shown." In his later work he further advocated the abandonment of the contention that notions like truth and meaning can be generally and adequately secured by logico-philosophical analysis. Bringing into play a fundamentally dynamic view of language and its interaction with the world, he urged us to reflect on our own involvements and interactions in communal "language games." Wittgenstein was decidedly against theorizing in philosophy, general accounts being, according to him, nonsensical metaphysics. While his views about the public and social nature of language would have a profound effect on philosophy as evidenced by the direction of papers in this volume, his anti-theorizing stance would leave him on the periphery of various of its more academic developments.

In the next generation Carnap, inspired by Frege, Russell, Wittgenstein, and Gödel, took the new developments in logical analysis to provide the means for transforming philosophy from a muddled field of unbridled metaphysical speculation to a delimited, genuinely scientific inquiry. On the one hand, Carnap stressed that there is no one correct formal language or linguistic framework, so that various sorts of phenomenological, physicalistic, or other bases for languages can be "tolerated." On the other hand, once a particular framework is adopted, basic notions and truths are fixed, albeit via convention or linguistic stipulation. Logic and mathematics, as formal or "analytic" disciplines, are constituted by such frameworks and are therefore devoid of any independent factual or empirical content. Logical and mathematical truths are truths by virtue of linguistic meaning alone, and are in that sense-and only in that sense-a priori. Traditional philosophical disputes about necessary truths were thus regarded by Carnap as emerging through choices of different frameworks, not through genuine argument. Genuine knowledge, scientific knowledge, emerges only through empirical investigation of the world.

Carnap's views exemplified those of the so-called Vienna Circle, of which he was a prominent member, and later the wider movement known as logical positivism. This movement, committed to developing a truly scientific philosophy, held that all truths are either purely linguistic (analytic) or else empirical (verifiable) in character, and that traditional a priori metaphysical speculation was meaningless because unverifiable. Logical positivism was most prominent in the 1930s and later, in America, through the 1950s, and is still sometimes identified with analytic philosophy, though the analytic tradition, as we have seen, has various metaphysical, i.e., anti-positivistic, roots.

W. V. Quine, though a student of Carnap's, famously moved against the a priori, attacking Carnap's resurrection of the Kantian analytic/synthetic distinction. For Quine philosophy is continuous with science; there is no clarifying demarcation that can be made between the two disciplines. All knowledge is part of science, loosely construed, but there are no a priori starting points in stipulated linguistic frameworks and no ways of independently verifying empirical truths by means of uninterpreted observations. Science is to be worked out from within science; Quine naturalizes epistemology and even philosophy itself. He considers first-order logic as a useful notation, but also as a science like any other. And he takes Tarski's account of truth definitions for languages formalized in first-order logic to be all that need be articulated in general about the notion of truth.

Quine does not wish to eliminate metaphysics; the best scientific theories entail commitments to both physical objects and to abstract sets. In terms of the traditional intension vs. extension distinction (where loosely speaking the extension of a term is the collection of things that the term is true of, and the intension is some more intrinsic sense of the term) Quine espouses the clarity and workability of extensionalism, the exclusive reliance on extensions. For Quine a substantial, intensional theory of meaning is simply too unworkable and unclear, and unnecessary. Ironically, Quine's openness to metaphysics has opened the door to newer metaphysics of an intensional kind. Even so, Quine's views have had an immense impact on the practice of subsequent analytic philosophy.

As the papers in this volume attest, analytic philosophy remains a vital philosophical tradition, still centrally preoccupied with the analysis of language and thought. The papers in the first half of the volume contribute to the project of developing a general theory of meaning, a continuation of the original Fregean aspiration to get at the structure of thought through the basic structures of language. These papers are generally more detailed and narrower in focus than the rest. They are themselves arranged according to breadth of category, in a loose progression from the broader subjects toward the finer analyses, and with attention to bridging thematic connections. To set the stage, the main features of Frege's analysis of language are quickly reviewed:

For Frege there are two dimensions of meaning, sense (Sinn) and reference (Bedeutung). These dimensions are attributed at the basic level to proper names and their objects. In contrast to the sentence 'The Evening Star is the Evening Star', the sentence 'The Evening Star is the Morning Star' has "cognitive value." Two different proper names are being equated; although they have different senses, they have the same reference, the planet Venus. The truth of the sentence is thus explained by sameness of reference but its cognitive value by the difference of sense, which Frege also called the "mode of presentation" of the object.

Frege sharply distinguished the 'is' of identity from the 'is' of predication. In the sentence 'The Morning Star is a planet' the 'is' does not serve to equate at all, but is part of the predicate 'is a planet'. Such ambiguities of natural language he was able to analyze out with his Begriffsschrift, through the introduction of formal notation for predication and quantification. The reference of a predicate for Frege is a concept (Begriff), and Frege took concepts to be like mathematical functions, "unsaturated" until completed by an input, an object. Concepts map objects to truth values, according to Frege. For example, 'is a planet' refers to a concept which can be completed by the object Venus, and when completed, maps it to the truth value "True".

In its completed form, the concept figures in a sentence (Satz), a completed unit of meaning. Frege insisted that the sentence takes priority over its parts in the order of analysis. Concepts emerge through examining the systematic logical contribution that a predicate makes to an entire system of sentences. We begin by taking logical relations among sentences for granted and then work our way into a conception of their working parts by focusing on the inferential relationships among them.

Frege's Satz is actually ambiguous between the declarative sentence uttered and the abstract proposition expressed. But usually it is weighted toward the latter, the (Fregean) proposition. The sense (Sinn) of a proposition for Frege is the thought (Gedanke) that it expresses. According to Frege, a thought is objective in being graspable by different individuals and independent of human psychological activity. The reference (Bedeutung) of a proposition, remarkably and most controversially, is a truth value, either "True" or "False," to be worked out in terms of the component saturated concepts and logical structure.

Frege used his Sinn vs. Bedeutung distinction to account for an important shift in indirect discourse and in propositional attitude contexts, when words are quoted or otherwise being discussed "obliquely." In 'Copernicus said that the Morning Star is not the Evening Star' or 'Copernicus believes that the Evening Star is not a planet' the subordinate that-clause is itself expressing a thought or a proposition but does not contribute directly to the truth valuation of the sentence. Frege argued that the reference for the inner proposition cannot be its "customary" reference, a truth value, but is its "indirect" reference, which agrees with its sense. The content of Copernicus's statement or belief, and hence the truth value of the whole sentence, does not depend upon the truth value of the inner proposition at all; Copernicus could truly be said to believe a falsehood. Nowadays, the Fregean proposition is often taken to be nominalized by a that-clause, is more or less identified with Frege's Sinn or Gedanke for the proposition, and is said to express the truth condition of a sentence.

Stephen Schiffer's paper serves as an appropriate beginning for this volume in that it tackles a central part of Frege's theory and moreover sets the stage with its wide focus, touching on various issues raised in later papers. Schiffer first reviews "generic Fregeanism," the core commitment of Frege's theory to mind-independent, language-independent propositions and their functional dependence on concepts. Schiffer then points to the lack of an adequate account of what concepts are as an evident deficiency of the Fregean theory. How does one account for concepts and propositions-"creatures of darkness" for Quine-and our ability to know and to refer to them?

Schiffer proposes "pleonastic Fregeanism" as an adequate theory of reference for concepts and propositions. He takes 'that S is true' to be the pleonastic equivalent of the sentence 'S', "pleonastic" since this is a "something-from-nothing" transformation. The 'that S' here is as in Frege's theory for subordinate clauses described above: It refers to the proposition that S and is more or less identified with its Fregean indirect reference, its sense. Schiffer's basic equivalence "that S is true if and only if S" has the look of the well-known Tarski truth schema (of which more when we discuss theories of truth below), but his emphasis is on a direct equivalence between two different propositions. As Schiffer acknowledges, Frege would have rejected such an equivalence. However, in Schiffer's minimalist approach all that is required of propositions is that "we be party to our that-clause-involving linguistic and conceptual practices." He argues: "There is nothing more to the nature of propositions than can be read off [those practices]."
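
A schematic contrast may help fix the difference (the displayed forms are informal glosses, not quotations from Schiffer or Tarski):

    Schiffer:  the proposition that S is true if and only if S
    Tarski:    'S' is true if and only if S

The second is a metalinguistic claim relating a quoted sentence to the world; the first relates two propositions directly, which is why Schiffer can treat it as a "something-from-nothing" equivalence internal to our that-clause-involving practices.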

Schiffer analyzes pleonastic propositions in terms of "pleonastic concepts." He takes these concepts to be the references of expressions occurring in that-clauses, even for proper names. In 'Ralph believes that George Eliot wrote Middlemarch' the reference for 'George Eliot' is to be an "object-dependent concept," partly individuated (set apart) by George Eliot but also involving a broader concept that subsumes the concept of her having once existed. For Frege the (customary) references for proper names were objects, but their (indirect) references in oblique contexts were their senses. Schiffer's approach can be viewed as providing a more refined account of the indirect references of proper names. It can be argued that by this account Frege's well-known "the concept horse" conundrum becomes deflated in a relaxation of the distinction between concept and object.

For Schiffer the larger point is that operationally speaking it is our criteria for evaluating belief reports that come first; it is according to these criteria that propositions are to be individuated; and then concepts are to be individuated in terms of these. It is by this means that those "creatures of darkness" are to be domesticated. Schiffer maintains that his pleonastic Fregeanism has consequences for a range of philosophical problems but also sees that it raises new issues, issues which he intends to address in a forthcoming book.

João Branquinho focuses on the individuation of Fregean propositions, taking this notion for granted. Generally speaking, how Fregean propositions are to be individuated is a crucial part of their analysis; the whole Fregean picture is based on the espousal of propositions and thoughts, things that can be held constant through the vicissitudes of human discourse, and what propositions are turns on how they are to be identified and set apart.

Branquinho takes as his starting point the principle labeled by Gareth Evans the "Intuitive Criterion of Difference of Thoughts," which is stated symbolically and paraphrased as: "propositions are identical only if, necessarily, every attitude that a rational subject, who grasps them, takes at a given time to one is an attitude she takes at that time to the other." The main elements at play here are the modality of necessity, the attribution of rationality, and the focus on propositional attitudes (e.g., belief). This principle is evidently restricted in two directions: It does not address the identity of propositions entertained by different subjects, nor does it address the identity of propositions entertained at different times. Branquinho argues that the latter restriction can be relaxed, so that the synchronic principle can be extended to a diachronic one.
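
Since only the paraphrase is reproduced here, one informal symbolic rendering of the synchronic principle (not Branquinho's own notation) would be, for propositions p and q:

    p = q  only if  necessarily, for every rational subject s who grasps p and q, every time t, and every attitude A: s takes A to p at t if and only if s takes A to q at t.

Contrapositively, a single rational subject who at one time, say, believes p while doubting q suffices to show that p and q are distinct propositions.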

Branquinho's extension is grounded on the idea that propositional attitudes may persist over time. Taking for objects of retention mental particulars rather than mental types, he first delineates "token attitudes," "concrete mental states in which subjects may be for certain periods, states that have particular propositions as their contents and belong to certain types." This he does by making explicit five assumptions about token attitudes and attitude types in terms of a 5-place relation. Then in terms of a further binary relation of "antagonism" of attitude types and a 6-place relation of retention of token attitudes, he states his diachronic extension symbolically and paraphrases it as: "a sufficient condition for propositions to be distinct is that it is possible for a rational subject to take, at (possibly) different times, conflicting attitudes toward them provided that she retains at later times the attitudes previously held."

Branquinho in effect fixes propositions in terms of other notions for which he posits a constancy of presentation. However, attitude retention requires attitude content, so that a principle expressing content in terms of retention would seem to be circular. To this Branquinho replies that his principle is not an absolute criterion for individuating propositions, but rather one relative to some already available theory of content.

Philip Peterson also deals with individuation, but in a different context, one which had been established by his book Fact Proposition Event (1997). Russell and Wittgenstein had early on distinguished facts and propositions, and there is a long tradition of analyzing facts, propositions, and events in the interplay of language and reality. Peterson developed his elaborate "FPE theory" in part to reject the identification of facts with true propositions. The theory is based on an empirical investigation of linguistic practice, as in the "ordinary language" philosophy of John Austin, but also assimilates the linguistic analysis of Noam Chomsky. The central premise is that through these means one can get at genuine epistemological insight about the structure of knowledge and meaning. Peterson proposed in his book that fact, proposition, and event are basic categories that are universal and so, innately possessed. He distinguished predicates in natural language as "factive," "propositional," and "eventive" according to the extent that they are to be held constant through certain syntactic and semantic tests, and these predicates accept clauses that refer to facts, propositions, and events respectively. For example, 'knows' is a factive predicate, so that in 'Copernicus knows that Venus is a planet' the that-clause refers to a fact. On the other hand, 'believes' is a propositional predicate, so that in 'Copernicus believes that Venus is a planet', the very same that-clause now refers to a proposition. Peterson's notion of proposition is more restrictive than Frege's, and Peterson's facts and events introduce finer modes of mediation with reality.

Peterson in the paper discusses individuation first with respect to correspondences among facts, propositions, and events, and then within each category. For him a fact makes some proposition true. He suggests that the correspondence of facts to propositions is most plausibly one-to-two, that one fact corresponds to two propositions, presumably the one made true by the fact and the one made false by it. On the other hand, Peterson generally takes the correspondence of events to facts to be one-to-many in a robust sense, and provides examples.

Individuation within each category is to be according to the Leibnizian principle, that two entities are identical exactly when they share the same "relevant" properties. For events these properties are those given by eventive predicates, which are extensional (determined by their extensions). This is to be an expansion of the idea that two events are identical exactly when they have all the same causes and effects. "Fact and proposition individuation is not as straightforward as event individuation," because factive and propositional predicates may create "logically opaque" positions for the that-clause and so do not express properties relevant to the application of the Leibnizian principle. "Matters are worse for propositions," Peterson writes, but "the seeming intractability of proposition individuation may be overcome by embedding the problem in an explanation of fact cognition."
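
Schematically, and as an informal gloss rather than Peterson's own notation, the Leibnizian principle for events reads:

    e1 = e2  exactly when  for every relevant (eventive, extensional) property P, P(e1) if and only if P(e2),

with the causal criterion, that events are identical exactly when they have all the same causes and effects, amounting to the special case in which the relevant properties are the causal ones.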

Peterson was influenced by Quine, as evidenced particularly by the emphasis on the efficacy of extensional predicates for individuation. Moreover, Quine's "eternal sentence" comes closest to Peterson's "proposition." However, although Quine argued against developing any intensional theory of meaning, Peterson's is at base a metaphysics with avowedly intensional features.

With the next several papers there is a shift to the analysis of reference for terms, both singular (like 'I' and 'Hesperus') and general (like 'Greeks' and 'gold'). The progression starts with an analysis along Fregean lines, gets to the finer issues of vacuity and synonymy, and proceeds to accounts of vagueness and shifts in linguistic meaning.

Manuel García-Carpintero discusses context-dependent terms, focusing on indexicals, and supports a "token-reflexive" analysis. Indexicals are the most evidently context-dependent of terms; in 'I am here now', the 'I', 'here', and 'now' are indexicals. Hans Reichenbach, in his Elements of Symbolic Logic (1947), proposed the token-reflexive theory of context-dependence, which holds that context-dependence forces referring terms to be concrete tokens (instances), rather than the types they instantiate. In 'you and you have to work together' there are two tokens of the type 'you', and they would presumably have different references. Types have associated linguistic rules for assigning referents to their various tokens, and a token is therefore involved and thus "reflected" in the determination of its contribution to truth conditions.

The token-reflexive approach has an apparent difficulty exemplified by 'p; therefore p'; this fundamental inference may fail without both tokens of p having the same reference, and thus no reasonable "logic of indexicality" can be developed. With such concerns in mind David Kaplan proposed a theory that posits abstract "expressions-in-context" which for a given type of indexical groups several tokens together as having the same "content". On previous occasions García-Carpintero had argued against such theories on two scores: First, any "logic of indexicality" can only be developed, in any case, without formal guarantees (e.g., of tautologies) when different expressions have the same reference; and second, the various arguments lodged against the token-reflexive view can also, upon deeper analysis, be directed against these abstract theories.

García-Carpintero defends the token-reflexive account as one allowing for a semantics for ordinary contexts as well as for indirect discourse. Recall again that for Frege, in indirect discourse the reference for a proper name becomes its sense, its concept or mode of presentation. García-Carpintero argues that for analyzing indexicals the Fregean sense serves well and reinforces the token-reflexive approach: It is individuative, is based on conventional linguistic rules, and is "epistemically diaphanous" to competent speakers, i.e., they naturally associate senses with expressions, and it is no epistemic achievement for them to do so.

García-Carpintero proceeds to advocate the Fregean token-reflexive sense as the best way to accommodate the interaction of intensionality with context-dependent expressions. He first expands on a notion of "semantic presupposition" for the attribution of token-reflexive senses to indexicals. These presuppositions are propositions taken for granted as part of linguistic knowledge, and he describes how senses interact directly with such propositions. He then proceeds to support a "hidden-indexical" account of reference for proper names in indirect discourse, an account that goes beyond Frege's in attributing wider aspects of contextual reference. Through these means García-Carpintero provides an explanation of why indexicals occurring inside attitude ascriptions cannot be generally replaced by co-referential terms. He concludes by arguing for the individuative and explanatory power of his account of indexicals over Kaplan's, and in some respects over Frege's (and apparently Wittgenstein's), who regarded the circumstances accompanying the expression of a thought involving indexicals as literally part of that expression.

Mark Sainsbury discusses empty (or vacuous) names, casting the discussion as one developing an analogy: If the meaning of a sentence is its truth condition, what it would be for the sentence to be true, then the meaning of a name should be its reference condition, what it would be for the name to refer. The usual approach of specifying the bearer for a name is more analogous to specifying the truth value for a sentence (this harkens back to Frege's reference for names and propositions); Sainsbury argues instead for a truth-conditional semantics approach to names (this recalls Frege's sense and is consonant with Schiffer's pleonastic concept).

Sainsbury endorses the approach of Tyler Burge to reference conditions. The starting point, in an instance, is: The reference of 'Hesperus' is Hesperus. Burge would formulate this as: For all x, 'Hesperus' refers to x exactly when x is Hesperus. In a paradigm for analytic philosophy, Russell in his famous essay "On Denoting" (1905) eliminated "denoting concepts" by converting their expressions from definite descriptions to existence assertions. Schematically, he transformed 'The A is B' to: ∃x(Ax & ∀y(Ay → y = x) & Bx). In classical logic, for any name a one has the formal theorem ∃x(x = a) by existential generalization, and so Burge's formulation implies Russell's (with A = reference of 'Hesperus' and B = Hesperus). The point is to proceed instead in a system of "free logic" in which such conclusions cannot be drawn; several such logics have been developed, and they do not posit a non-empty universe of discourse. Loosely speaking, Burge's formulation remains an "at most one" formulation, in contrast to the "exactly one" formulation of Russell.
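
A worked instance with an empty name brings out the contrast (the example is illustrative and not drawn from Sainsbury's paper). For the empty name 'Vulcan', the Burge-style reference condition is:

    ∀x('Vulcan' refers to x ↔ x is Vulcan).

In classical logic the theorem ∃x(x = Vulcan) would combine with this to yield something that 'Vulcan' refers to, recovering the Russellian existence assertion; in a free logic that theorem is unavailable, and the reference condition can hold even though 'Vulcan' refers to nothing, which is exactly its "at most one" character.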

Sainsbury goes on to support the intelligibility of empty names from the point of view of a "semantic theorist" immersing himself in the name-using practices of his subjects. Rather than specialist knowledge or the necessity of having referents in reality, Sainsbury emphasizes induction into linguistic practices. He regards cases of unresolved existence like 'Homer' to be particularly striking. Bearer-specifying or descriptivist semantics, like Russell's, would have to focus on actual existence, whereas the reference condition semantics of the sort that Sainsbury is advocating can focus on stipulations governing the use of a name.

Sainsbury then raises a problem: Reference conditions for an empty name introduce an individual concept which is non-descriptive yet true of nothing. But what is there other than the world and its descriptions? Sainsbury argues about reference to the world that the sense or meaning of a proper name should be "individuative," with at most one thing answering to it. Proceeding through a modal, possible-worlds argument he concludes that a name which is empty is necessarily so. As for descriptive accounts with empty names, Sainsbury argues for the truth of their utterances under "observationally similar circumstances," again invoking name-using practice. He does write at the end: "Much else remains to be done in order to provide a full justification for the reference condition view."

The next two papers have their starting points in essays of Hilary Putnam, an important philosopher for analytic philosophy. Putnam's philosophy has gone through several phases, all featuring engagement with forms of realism. He early on criticized the verificationism and conventionalism of logical positivism, affirming that realism is the only philosophy that does not render the success of science a miracle. As a new approach to the philosophy of mind Putnam proposed "functionalism," which argued for the autonomy of mind as resting on functional organization along the lines of computer architecture, with mental states being essentially computational states. In his well-known essay "The Meaning of 'Meaning'" (1975), a bridging essay toward a new phase in his philosophy, Putnam attacked the basic presumptions for general terms that intension determines extension and intensions are determined by 'psychological states'. He argued with thought experiments, the best known of which involved Twin Earth, which is exactly like Earth except that what goes for 'water' is not H2O but a different substance. In the late 1970s Putnam rejected elements of his own and others' realism which he termed 'metaphysical', and began to advocate a more pragmatic realism which he termed 'internal'. He charted a new, subtle course, one still realist and anti-sceptical, that denies the possibility of the objective representation of reality independent of specific descriptive contexts and advocates a perspective internal to language and based on linguistic practice.

Roger Wertheimer presents a wide-ranging discussion of the synonymy and substitutivity of terms. From Frege's 'Evening Star' vs. 'Morning Star' and Russell's 'Scott' vs. 'author of Waverley' as well as earlier analyses of Locke, issues of synonymy and substitutivity have been focal for the study of meaning since they bring out the roles of sense and intensionality, reference and extensionality. Synonymous terms are seemingly substitutable for each other in a sentence without affecting its meaning. Replacing each instance of 'Greeks' by 'Hellenes' in 'Greeks are Greeks' yields 'Hellenes are Hellenes'. But what about intercepting or non-uniform substitution, as in 'Greeks are Hellenes'? Frege had stressed that such sentences have "cognitive value." Wertheimer argues for interception nonsynonymy, that intercepting substitution does not always preserve meaning. For him one reason is evident: 'Greeks are Greeks' and 'Greeks are Hellenes' are not used alike, with the latter used to explain the meaning of its terms, more like ' 'Greeks' means Hellenes'.

Wertheimer points out that Putnam in an early 1954 paper had provided the first lines of argument for interception nonsynonymy: Logical form has semantic content, and interceptions lack the truth-securing syntax of logical sentences, e.g., tautologies. Wertheimer views the criticisms of that paper as misdirected toward intensional conundra whereas Putnam was emphasizing logical syntax per se. For Wertheimer, the final irony here is that Putnam himself in a 1981 paper renounced interception nonsynonymy. With such a to and fro at the intersection of many issues Wertheimer regards interception nonsynonymy as an important, empirically evident phenomenon that must be understood.

Wertheimer proceeds to approach interception nonsynonymy from different perspectives and continues to play off of Putnam's two papers. Wertheimer then confronts an important defense of interception synonymy, that intercepting substitution does preserve meaning, made by Alonzo Church. Church, also in 1954, suggested that a good way to test whether a sentence is about some linguistic expression, or rather about something that the expression is used to mean, is to translate the sentence into a foreign language. For example, revealing might be a translation of ' 'Blood is red' says blood is red' into German as ' 'Blood is red' heißt, daß Blut rot ist'. However, Wertheimer argues that Church's arguments with his test are at base circular and proceeds to disentangle various elements in an extended exegesis. (Incidentally, Quine, with his well-known thesis of the "indeterminacy of translation," would also not subscribe to Church's test.) Wertheimer then proceeds to questions of truth and modality, pointing out further inadequacies with Church's test; while logical truths are true by syntax, interceptions alter syntax and modality.

George Wilson takes as his starting point Putnam's "The Meaning of 'Meaning' " and within its realist context entertains temporal ambiguity of extensional reference for a general term. Focusing on a specific example, Putnam would endorse (a): The extension of 'gold' as the term is used now is the same as the extension of 'gold' as the term was used in 1650. To elaborate, the extension of 'gold' is the collection of objects that 'is gold' is true of; 'is gold' as used now is taken to be true of an object exactly when it is composed of the element with atomic number 79; and 1650 is before the advent of molecular chemistry. Wilson however argues for the viability of (b): It is not the case that the use of 'gold' in 1650 determined that 'is gold' was true of an object exactly when it is gold (i.e., is composed of the element with atomic number 79). Wilson presents an argument of the type advanced by Gary Ebbs: Platinum, with atomic number 78, was chemically indistinguishable from gold in 1650; later, it can be maintained that any earlier ascription of 'gold' to platinum was mistaken or, acceding to previous usage, that platinum could be acknowledged as a kind of gold.

How are (a) and (b) to be reconciled? Wilson argues with another heuristic example about emerging standards that (a) and (b) are fully and intelligibly compatible, and that it is the implication, if (a) then not (b), which is false. A standard was settled upon for 'gold' some time after 1650, and "we apply it to our own uses of the word and, retroactively, to the legitimate precursors of those present uses." In Putnam's phrase there is a "division of linguistic labor" in linguistic practice with experts setting the standards for use of general terms; Wilson expands this notion over time. Varying extension also plays a role in the next paper, but for a reason other than ambiguity.

Terry Horgan confronts the Sorites Paradox, regarding it as having profound implications for metaphysics, logic, and semantics, and describes a general approach to the issues it raises. A paradox is an apparently unacceptable conclusion drawn from apparently acceptable premises by apparently acceptable reasoning; paradoxes have long played pivotal roles in philosophy because of the issues that they have elicited about meaning and truth. The Sorites Paradox addresses the vagueness of predicates that admit numerical calibration. In a rendition of Horgan's, let B(n) abbreviate 'A man with n hairs on his head is bald'. Then B(0); for any n, B(n) implies B(n+1); and hence B(10^17). Horgan observes that, taking the conclusion to be false, reasoning in standard two-valued logic leads to the existence of an n such that B(n) & ~B(n+1). He rejects the "epistemic" position according to which there actually is such a transition point but it "is unknowable to finite minds like ours," so to him some kind of repudiation of standard two-valued logic is necessary for an adequate analysis.
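
Written out, the paradoxical reasoning is a long chain of modus ponens (a schematic display of the argument just described):

    B(0);  B(0) → B(1);  B(1) → B(2);  . . . ;  B(10^17 - 1) → B(10^17);  hence B(10^17).

Classically, anyone who accepts B(0) and rejects B(10^17) must allow that some conditional in the chain fails, that is, that there is an n with B(n) & ~B(n+1); this is the sharp transition point whose existence Horgan finds unacceptable.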

Horgan next argues that there are two broad metaphysical approaches to vagueness depending on whether one affirms or denies ontological vagueness, vagueness in the mind-independent, discourse-independent world. The first approach posits genuine objects and properties that are vague, and the second approach takes vagueness to be a matter of language and of thought content. Horgan favors the latter approach and the treatment of truth as indirect correspondence between vague language and non-vague reality. One such treatment is "supervaluationism," according to which there could be many permissible interpretations that make precise the references for the vocabulary, and to be true is formulated as holding in all permissible interpretations. Horgan himself advocates another treatment, initially called by him "language-game" semantics, which construes truth as "semantically correct assertibility" under contextually operative semantic standards.
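
A stock illustration of the supervaluationist idea (not taken from Horgan's paper): for a borderline case b of baldness,

    'Bald(b)' is true on some permissible interpretations and false on others, hence neither true nor false overall;
    'Bald(b) or not Bald(b)' is true on every permissible interpretation, hence true overall.

Truth as holding in all permissible interpretations thus preserves the classical tautologies while leaving borderline atomic statements unsettled.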

Horgan's argument for the impossibility of ontological vagueness serves as the entrée into his general approach to vagueness. He takes as an essential attribute of vagueness "boundarylessness," which for a predicate like 'is bald' emphasizes that there is no determinate fact of the matter about the transition from true statements B(n) to false ones. Horgan considers boundarylessness to have two conceptual poles: an "individualistic" pole that asserts that any adjacent pair B(n) and B(n+1) must have the same semantic status (including indeterminateness); and a "collectivistic" pole that asserts the impossibility of iteratively applying the individualistic-pole requirement and of having a determinate collective assignment of semantic status to all the statements B(n). Horgan concludes that boundarylessness is logically incoherent, in a generic way that does not presuppose any particular logic of vagueness. This for him is "weak" logical incoherence, as opposed to "strong" logical incoherence, the avowal of logically contradictory individual statements like φ & ~φ. But for Horgan, "the world cannot be logically incoherent, even in the weak way: it cannot have features that are the ontological analogues of mutually unsatisfiable semantic standards." Hence he concludes that there cannot be ontological vagueness.

Horgan's "transvaluationism" is his general approach to vagueness, and it makes two fundamental claims: First, vagueness is weakly logically incoherent but not strongly so; and second, vagueness is "viable, legitimate, and indeed essential in human language and thought." Semantic standards governing vague discourse are "logically disciplined," in that the collectivistic-pole requirements dominate, although they do not defeat, individualistic-pole requirements. One is reminded here of Wittgenstein's open attitude toward formal contradictions. In a "forced march" of "is it true?" queries through a sequence like B(0), . . . , B(1017) Horgan urges us to "adopt a Zen attitude: be tranquilly silent in the face of those persistent queries, in the knowledge that no complete set of answers is semantically correct."

Timothy Williamson's paper, bridging the first swath of papers of this volume with the next, provides a new approach to the semantic paradoxes that, in contradistinction to indexicality analyses, posits changes in linguistic meaning to key terms. A semantic paradox is a paradox that turns on 'true'. The best known of the semantic paradoxes fall under the rubric of the Liar Paradox, one version being to decide whether the speaker of 'What I am now saying is false' is saying something true or false.

Williamson begins by asking: If paradoxes arise from shifts of context for terms like 'true', should this be attributed to their indexicality (change of reference without change of linguistic meaning) or to their ambiguity (change of linguistic meaning)? Stability of linguistic meaning would seem to be the basis of successful communication, and this would seem to favor the indexicality explanation according to which terms like 'true' function like 'I', a term we understand even if uttered by a stranger. However, Williamson takes a critical view of the indexicality approach, particularly that of Tyler Burge. In a modern version of approaches taken by Russell and Tarski, Burge resolved the Liar Paradox with a system of levels i and terms 'true_i' for each level that do not interact pathologically with each other. While having such a system brings out the indexicality, Williamson argues that a strengthened Liar Paradox using 'true at any level' infects Burge's approach. Williamson then takes David Kaplan's treatment of indexicality and develops a 'context'-sensitive version of the Liar Paradox, one that suggests a non-indexicality analysis.
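
The dialectic can be sketched informally (this is a schematic reconstruction, not Williamson's or Burge's own formulation). The hierarchical resolution handles a Liar sentence of the form

    (L)  L is not true_i

by denying it the predicate 'true_i' while allowing it to be true at a higher level; the strengthened version,

    (L*)  L* is not true_i for any level i,

resists assignment to any one level, and it is by exploiting such a sentence that Williamson presses his objection.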

Williamson's proposal is that the semantic paradoxes turn on actual shifts of linguistic meaning for their key terms. He sketches a schematic approach for context-sensitive speech acts based on 'say' and connecting that with 'true' and 'false'. The conceptual roles of 'true' and 'false' are fixed relative to 'say', but in his dynamic view 'say' can undergo small changes of meaning. Williamson argues that his approach is not defeated by a strengthened Liar Paradox because future understanding cannot be anticipated in present meanings. He suggests that even the term 'meaning' is unstable, and concludes by presenting a rather Wittgensteinian picture: Instabilities of meaning preclude any exhaustive treatment of all the semantic paradoxes, and indeed, further reflection uncovers new paradoxes and new solutions.

With the previous paper as an entrée, the next several papers discuss theories of truth. All of the previous papers dealt with truth, to the extent that they were engaged with the relationship between language and reality. However, they had remained largely neutral as to the nature of truth: What is it for a sentence (or proposition) to be true? Theories of truth tackle this and like questions, and one can say that to the extent that they expand beyond the logic of truth they become avowedly metaphysical. Aristotle wrote in his Metaphysics, "To say of what is that it is not, or of what is not that it is, is false; and to say of what is that it is, or of what is not that it is not, is true." As with many pronouncements made on truth, this can be taken to be informative or trivial.

Tarski in the early 1930s made the first significant advance in the logic of truth with his "semantical conception" of truth. He first articulated equivalences, of which an instance is: 'snow is white' is true if and only if snow is white. To elaborate, 'snow is white' is a sentence in the object language, the language under study, and the sentence is to be true exactly when snow is white, this entire assertion being made in the meta-language where the concept of truth is being articulated. The schematic form of these equivalences, already seen in some of the earlier papers in this volume, is called T, and Tarski argued that definitions of truth must meet the material adequacy condition, that every instance of his schema T holds.
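
In its general form, and using quotation marks in place of Tarski's structural-descriptive names of sentences, the schema can be displayed as:

    (T)  X is true if and only if p,

where 'p' is replaced by a sentence of the object language and 'X' by a meta-language name of that sentence; material adequacy requires that a proposed definition of truth entail every such instance.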

Tarski next provided a definition of truth for the sentences of any language formalized in first-order logic, assuming his schema T for the basic (atomic) sentences and showing how the schema may be extended to all sentences of the language. His definition brought out the inherent assumption of compositionality for the language, that the meaning (or truth value) of a complex sentence is functionally determined by the meaning (or truth value) of its constituent parts. Moreover, his definition accommodated infinitely many sentences, being a seminal example of what is now known as a recursive definition. The crucial move that he made was first to define satisfaction, a relation between sequences of items in an interpretation (model) for open formulas (formulas with unquantified variables), and then to define truth for sentences (formulas without unquantified variables) in terms of satisfaction. Carnap and Quine both lauded Tarski's work, the former regarding it as having legitimized the notion of truth by reducing it to unproblematic notions and the latter emphasizing how the truth predicate as embodied by schema T is "a device of disquotation" for passing from words quoted to words used.
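
In outline, and suppressing the apparatus of sequences, the recursion runs as follows, for an interpretation M and an assignment s of objects to variables (a schematic rendering of the standard clauses):

    M, s satisfies ~φ      exactly when  M, s does not satisfy φ;
    M, s satisfies φ & ψ   exactly when  M, s satisfies φ and M, s satisfies ψ;
    M, s satisfies ∃xφ     exactly when  for some object a of M, M, s[x:=a] satisfies φ,

where s[x:=a] agrees with s except for assigning a to x. A sentence is then true in M exactly when it is satisfied by every assignment (equivalently, by some assignment).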

Tarski's analysis of truth in terms of satisfaction serves as the basis of the mathematical field of model theory, the generalization of abstract algebra incorporating formal semantics in a set-theoretic environment. Having established his now famous mathematical result on the undefinability of truth in formal languages, Tarski believed that it is hopeless to define truth for natural languages. However, for most philosophers bent on tackling the concept of truth for us and our natural languages, Tarski's analysis only serves as a beginning-if that-depending on how one takes his schema T. Tarski's work nevertheless established a framework of discussion for succeeding work on truth, as illustrated by the four papers here on theories of truth.

Dan Goldstick explores two commitments of what he terms the 'Correspondence Theory of Truth'. Correspondence theories of truth analyze truth in terms of a bifurcation into propositions (or sentences) and reality (or facts), and an interconnecting correlation. Such theories are longstanding in various versions, with Tarski's "semantical conception" arguably one. Goldstick begins by asking what more there is to the Correspondence Theory than Aristotle's dictum and Tarski's disquotational equivalence. Goldstick dismisses logical atomism, according to which the truth of "molecular" propositions is to be analyzed in terms of their "atomic" parts. (Elsewhere, he had in fact argued that the sentence 'Water is heavier than ice, and water is water' expresses the same proposition as 'Water is heavier than ice'. This precludes a definition of truth à la Tarski by recursion, although compositionality without reduction to atoms is not necessarily forestalled.) Goldstick then discusses the two commitments of the theory: the distinctness of the existence of a true belief from the existence of the fact believed (including the denial that one in general logically entails the other), and an isomorphism between the two. This articulation of correspondence in terms of beliefs, distinctness, and structural similarity has much in common with Russell's pre-Tarskian articulation of correspondence in The Problems of Philosophy (1912).

Goldstick then raises the following concern: Just as there must be an isomorphism between true beliefs and the facts of the matter, a parallel case can be made for an isomorphism between false beliefs and the facts of the matter. Recalling Wittgenstein's treatment of negation in his Tractatus (4.061-4.0621) Goldstick, having rejected logical atomism, confronts the question of how false beliefs can be said to have a correspondence to facts as well. This leads him to conclude that beyond correspondence there must be more to the nature of truth, in the direction of the differentiation of facts, a direction he intends to pursue.

Lorenz Puntel offers a new explanation of the concept of truth, one directed at answering the question that serves as his title: What does '. . . is true' ('it is true that . . .') express? He first declares that his explanation will be deliberately restricted to particular sentences (expressing propositions) at the basic level of 'snow is white', arguing that the basic case has not been adequately treated. Puntel also argues against well-known arguments, including Tarski's, that truth is inexpressible (undefinable). Puntel then stresses that his title question is but a special case of the more general question, what does this sort of "semantic vocabulary" express? He describes how, starting with language as primordially a system of symbols, "the function of semantic vocabulary is to make language fully determinate 'from within language itself'."

Puntel then reviews R. Brandom's "prosentence" theory of truth. According to the original prosentence theory of truth, just as the pronoun 'he' in 'Tom did as he was told' functions "anaphorically" by referring to the antecedent noun 'Tom', 'it is true' and 'that is true' function anaphorically by referring to antecedent sentences. For Brandom 'is true' is a prosentence-forming operator which when applied to a term (like 'it' or 'that') for a sentence tokening yields a prosentence with that tokening as anaphoric antecedent.

Acknowledging this as an inspiration, Puntel proceeds to describe his new theory. Like Brandom's it is a "deflationary" theory of truth in that it does not regard 'is true' as a genuine predicate, and again, it is taken as a syntactic operator. However, moving forward rather than backward with his concept of semantic vocabulary, Puntel regards 'is true' as a "cataphoric" operator, one which when applied to sentences of "underdeterminate" status yields sentences of fully determinate status. In this explanation Puntel shifts the Fregean proposition analysis by regarding his operator not as 'it-is-true' applied to that-clauses expressing propositions, but as 'it-is-true-that' applied to sentences per se. However, there is a resonance of sorts with Schiffer's pleonastic Fregeanism in the something-from-nothing conceptualization of truth.

Puntel further schematizes the notion of truth as a composition of two functions. The first works in essence as a syntactic operator adjoining 'it-is-true-that'. The second assigns fully determinate status to the resulting sentence. In Puntel's theory there is no place for a correspondence relation between sentences and reality, but neither does he regard truth as redundant; "there is simply identity between the proposition (or states of affairs) expressed by a true (that is, a fully determinate) sentence and a fact in the world."

Puntel proceeds to reinterpret Tarski's schema T as a relation between underdeterminate sentences and their fully determinate counterparts. This is actually a radical reinterpretation, since Tarski had meant his schema to express a condition of material adequacy for accounts of the relation between sentences and what they are about, not between sentences and sentences. Intuitively speaking, truth should hinge on reality, not merely on language alone. This raises the main challenge for Puntel's analysis: How is the full determination of sentences to be understood? Having shifted the weight of truth into language, his analysis of semantic vocabulary cannot be the repository for substantial properties of truth. At the end, Puntel mentions that Carnap's old idea of linguistic frameworks should be of fundamental importance in characterizing full determination.

Gabriel Sandu takes a novel approach to theories of truth by addressing the question to what extent the requirements of a minimalist program for defining truth are met by formal languages and their semantics. The "minimalist conception" of truth, as first advanced by Paul Horwich, begins by observing that 'is true', like the notorious 'exists', has the surface grammar of a predicate like 'is white', but as Wittgenstein warned, one must not be misled by this to produce mistaken analogies that then spawn pseudo-problems. What there is to 'is true' can simply be accommodated by using instances of a weakly interpreted version of Tarski's schema in the form: the proposition that p is true if and only if p (T). Although 'is true' may still be regarded as a predicate, what can be properly said about it can thus be separated off from fuller theories of reference, satisfaction, and so forth.

With formal languages in mind, Sandu formulates, in addition to the schema (T) and the analogous one (F) for 'is false', several further requirements of the minimalist project as he sees it. One pivotal desideratum is that truth in a language should be definable in that language. This is to fit with the minimalist view that to understand truth as applied to a language is merely to understand that language, not through use of a special truth predicate. Sandu reviews and compares at length various logics in connection with the definability of truth. In a shift of emphasis from the previous two papers, Sandu then turns to the issue of compositionality, whether the meaning of a sentence should be functionally determined by the meaning of its proper subformulas. Here, the concerns have to do both with the minimalist conception as well as technical results involving recursively generated languages. Sandu navigates his way to a final requirement that the language not be compositional.

Having established the minimalist requirements for a formal language, Sandu proceeds to discuss independence-friendly logic (IF-logic), developed jointly by him and Jaakko Hintikka, and to show how far this logic meets the requirements. IF-logic is an expansion of first-order logic that features a new item of notation (x/Y), where x can be a variable or a logical connective and Y a variable or a set of variables, indicating the independence of x from Y. For example, in the formula ∀x∃y∀z∃(w/x)Rxyzw, w is to be exempted from being in the scope of the quantifier ∀x. The use of (x/Y) for x a quantifier is equivalent to the use of Leon Henkin's "branching quantifiers." IF-logic has a natural "game-theoretic" semantics according to which ∀x∃y∀z∃(w/x)Rxyzw is satisfied under an interpretation exactly when a player in a certain formally presented game has a winning strategy, and there is no corresponding semantics for, e.g., the subformula ∃(w/x)Rxyzw. With this lack of compositionality for sentences IF-logic meets Sandu's final requirement. IF-logic has other interesting properties, most notably that although a Liar sentence can be stated, a truth predicate for (first-order) IF-logic is definable in IF-logic. The one requirement of Sandu's that IF-logic fails to meet, consequently, is that the schema (F) for falsity be satisfied. Thus, IF-logic is a formal system that meets a range of minimalist truth criteria but abandons classical (contradictory) negation, albeit in a subtle way. Sandu uses IF-logic to argue for the logical consistency of a program for defining truth; in the next paper his collaborator argues from the vantage point that takes IF-logic as "our true basic logic."
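
The effect of the slash notation can be displayed through the Skolem-function form of the game-theoretic semantics (a standard rendering, not quoted from Sandu's paper): under that semantics,

    ∀x∃y∀z∃(w/x)Rxyzw  is satisfied  exactly when  there are functions f and g with ∀x∀z R(x, f(x), z, g(z)),

the independence of w from x showing up in the fact that g takes only z as argument; this is precisely the reading of Henkin's branching quantifiers.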

Jaakko Hintikka takes a critical look at theories of truth and what has held them back, and with unabashed advocacy argues that the development of IF-logic opens up new prospects for being able to define truth for our actual, natural languages. Supporting at least a pre-theoretic correspondence view of truth, Hintikka also starts with Aristotle's dictum and Tarski's definition of truth for formal languages. However, Hintikka diagnoses Tarski's result on the undefinability of truth as "merely a flaw in ordinary first-order logic," one easily corrected by introducing variable independence. The result is IF-logic, "our true basic logic," a logic within which a truth predicate can be defined. This definability is not surprising when one is made aware of the fact that IF-logic is equivalent to the Σ¹₁ fragment of second-order logic, since a truth predicate for this fragment is definable by a formula in the fragment. Indeed, the proof that IF-logic has a definable truth predicate proceeds through Σ¹₁. However, the thrust of Hintikka's work is to initiate a basic shift of what our underlying logic is, and he takes the properties of IF-logic as fundamental.

Indeed, Hintikka advocates the specific form of the truth definition for IF-logic, one formulated through semantic games of verification and falsification. He considers these games to be "constitutive of the concept of truth." According to him however, because of "a mistake of the Wittgensteinians and of the constructivists" this approach will not satisfy them; they confuse the language games of natural language that serve to define what it means for a sentence to be true with those structured "language games" like his that enable us to come to know the truth of a sentence.

Hintikka concludes by asserting that "an objective correspondence notion of truth is imbedded in our own language. If we understand that language, we understand the notion of truth." Taking IF-logic as the basis Hintikka, like Sandu, brings out what a commitment to that logic would entail: First, one gives up "the wild-goose chase of compositionality"; second, one gives up classical negation, at least for the full logic; third, axiomatic set theory becomes "useless as a vehicle for theoretically satisfactory truth definitions." These amount to radical departures from the usual context provided by first-order logic, departures more fully explored and argued for by Hintikka in his book The Principles of Mathematics Revisited (1996).

Oswaldo Chateaubriand's paper, related by its iconoclasm and expansiveness to the previous one, addresses the issue of what logical forms really are. This issue is, of course, as old as the study of logic itself, with Aristotle himself having raised the two issues of what is to count as logical, that having to do with reasoning alone, and what are the specific forms of the logical, its regularities of pattern. Chateaubriand first describes "the linguistic view of logic," in which various features of natural languages are specified as logical and rules of formula formation developed, and emphasizes how the language-independent universality of logical notions derives from their interpretations in terms of truth and falsity. However, whereas formal languages quickly become the focus in standard treatments of logic, Chateaubriand continues to locate logical forms in natural language and insists on the inadequacy of any one formalization for capturing them. His approach is ambitious and radical in abandoning the confines of first-order logic and aspiring to capture higher forms like Henkin's branching quantifiers, central to the Hintikka-Sandu IF-logic, and much more. At the end Chateaubriand sketches a view of logical forms as "independent of notational systems but [forms] that can be correlated with notational systems" as "logical properties and relations in a typed hierarchy."

Chateaubriand through much of the paper takes particular aim at Quine's defense of the linguistic view of logic in his Philosophy of Logic (1970). Quine's approach is modest and pragmatic in espousing first-order logic as adequate for science. He evidently regards the development of logic as a working out in mediis rebus of certain syntactic constructions, notions of grammar, and notions of truth. Chateaubriand takes Quine to task for what he sees as the shortcomings of this view in capturing the possibilities of natural language. He argues that Quine makes problematic suppositions about predication, suppositions involving a crippling circularity. This broaches what H. M. Sheffer once called the logocentric predicament, the truism that logic cannot be developed without assuming some logic. Chateaubriand argues that there are enough gaps in Quine's linguistic view to make it incoherent.

The next several papers have to do with the philosophical question, central to the analytic tradition, of the nature of mathematics. Stewart Shapiro addresses the question, "Is set theory the right foundation for mathematics?" Frege and Russell both worked to reduce the concept of number to logic, this being the main thrust of their logicism. Their efforts are now widely regarded as having failed, because of vitiating circularities inherent in their universalist conception of logic or at the very least because of their reliance on concepts no longer considered logical but distinctly set-theoretic. The reduction of arithmetical notions to sets and membership still stands, however, and this is the basis for the question addressed by Shapiro, with set theory taken to be the standard ZFC theory, perhaps augmented with large cardinal axioms, as formalized in first-order logic.
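To make the reduction vivid (a standard illustration, not one specific to Shapiro's paper), the natural numbers can be taken to be the finite von Neumann ordinals, so that order and successor are fixed by membership alone:

\[
0 = \varnothing,\qquad 1 = \{0\},\qquad 2 = \{0,1\},\qquad n+1 = n \cup \{n\},\qquad m < n \;\leftrightarrow\; m \in n .
\]

Once such surrogates are in place, arithmetical statements become statements of ZFC about these particular sets.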

Shapiro first rehearses two inter-related motivational senses of 'foundation' for mathematics: the ontological, having to do with what mathematics is ultimately about, and the epistemic, having to do with how mathematics is ultimately justified. He finds traditional accounts wanting, and proceeds to describe and advocate an approach to set-theoretic foundationalism recently proposed by Penelope Maddy, developing ideas of Yiannis Moschovakis. Maddy observes that the traditional structures of mathematics find faithful representations, "surrogates," in the set-theoretical universe and points out that once the various fields of mathematics are recast in this one arena of set theory with its explicit axioms, there are new clarifications and interactions. These mathematical benefits Maddy deems sufficient for a foundation. Shapiro proceeds to ask what philosophical ramifications, ontological and epistemic, this position might have. His discussion is wide-ranging, raising various traditional concerns but also suggesting how set-theoretic foundations are able to meet them, at least if they are construed modestly. Shapiro explains how the Maddy-Moschovakis account provides grist for the mill of structuralism, the position that the nature of mathematical entities is given by their structural relations to each other.

The Maddy-Shapiro position is not an unfamiliar one, especially for working set theorists. In my view there is a striking aspect of set-theoretic foundations that should be emphasized: When a field of mathematics is recast and formalized in set theory, the range of quantifiers becomes the entire set-theoretic universe. As Quine famously wrote, "To be is to be the value of a variable." For an abstract field like topology this leads to new examples and counterexamples involving the transfinite, examples that those working in the field may be loath to entertain. The field has actually been transformed as a mathematical enterprise, creating new tensions and dynamics beyond superfluous set-theoretic properties of surrogates and involving new questions of what the field is to admit into its study.

Bob Hale, as part of a neo-Fregean logicist program, sketches a formulation of the real numbers in terms of Fregean abstraction principles. Although much has been made of Frege's philosophy of language as we have seen, Frege's larger efforts were directed at the foundations of mathematics. In the first volume of his master work, Grundgesetze der Arithmetik (1893), Frege was led to associate a concept (function) with its Wertverlauf (value-range), the result of unrestricted saturation of the concept by all possibilities of objects. Frege then identified functions through their Wertverläufe by means of his Basic Law V. However, this fateful identification of extensions by way of intensions would engender the inconsistency famously brought out by Russell's paradox and lead to Frege's abandonment of his reductionist program.

In the Grundgesetze Frege used his Basic Law V to derive what is now known as Hume's Principle: the number of Fs is the same as the number of Gs exactly when the Fs can be correlated in one-to-one fashion with the Gs. Although Basic Law V itself is inconsistent, George Boolos and others showed in the 1980s that Hume's Principle is consistent relative to second-order arithmetic, and Richard Heck showed that in the Grundgesetze Frege actually derives his second-order axioms for arithmetic, equivalent to the Dedekind-Peano axioms, just from Hume's Principle.
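In now-standard second-order notation (one common rendering, not a quotation from the paper), Hume's Principle reads:

\[
\#F = \#G \;\leftrightarrow\; \exists R\,\bigl[\,\forall x\,(Fx \rightarrow \exists! y\,(Gy \wedge Rxy)) \;\wedge\; \forall y\,(Gy \rightarrow \exists! x\,(Fx \wedge Rxy))\,\bigr],
\]

where \(\#F\) is the number of \(F\)s and the right-hand side says that \(R\) correlates the \(F\)s one-to-one with the \(G\)s. Frege's Theorem, in Heck's reconstruction, is that the Dedekind-Peano axioms follow from this principle in second-order logic.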

Hale supports the neo-Fregean logicism of Crispin Wright according to which Hume's Principle is itself to be regarded as a truth of logic. But rather than merely defending this view Hale proposes to extend it to encompass the real numbers, in the spirit of Frege's own, incomplete treatment of the reals in the Grundgesetze. By a Fregean abstraction principle is meant, in mathematical terms, a criterion for identifying the reference of certain singular terms (proper names) via equivalence relations. Using such principles Hale recasts a well-known mathematical procedure for generating the real numbers from the integers, one that first develops ratios of integers and then proceeds to the ratios' "cuts" à la Dedekind. Hale deftly avoids certain technical problems by developing the positive real numbers first. For the move to ratios of integers he acknowledges the original antecedent in Eudoxos's theory of geometrical proportions as described in Book V of Euclid's Elements, and there is an affinity of sorts in the numbers there being measurements of geometric quantities and Frege's insistence on the reals as ratios of quantities. There is also a resonance with Dedekind's having considered arithmetic as a part of logic-although Dedekind's cut procedure and his contention that numbers are "free creations of the human mind" did not gain Frege's approbation. Though Hale emphasizes a minimal reliance on set theory, it is there in the (dis)guise of second-order logic. More pointedly, he uses set-theoretic arguments about infinite cardinals to bolster his abstraction principles and set them apart from the inconsistent Basic Law V. By incorporating such arguments this neo-Fregean logicism arguably goes significantly beyond Frege's conception of what is logical.

Hale supports the neo-Fregean logicism of Crispin Wright according to which Hume's Principle is itself to be regarded as a truth of logic. But rather than merely defending this view Hale proposes to extend it to encompass the real numbers, in the spirit of Frege's own, incomplete treatment of the reals in the Grundgesetze. By a Fregean abstraction principle is meant, in mathematical terms, a criterion for identifying the reference of certain singular terms (proper names) via equivalence relations. Using such principles Hale recasts a well-known mathematical procedure for generating the real numbers from the integers, one that first develops ratios of integers and then proceeds to the ratios' "cuts" à la Dedekind. Hale deftly avoids certain technical problems by developing the positive real numbers first. For the move to ratios of integers he acknowledges the original antecedent in Eudoxos's theory of geometrical proportions as described in Book V of Euclid's Elements, and there is an affinity of sorts between the numbers there being measurements of geometric quantities and Frege's insistence on the reals as ratios of quantities. There is also a resonance with Dedekind's having considered arithmetic as a part of logic-although Dedekind's cut procedure and his contention that numbers are "free creations of the human mind" did not gain Frege's approbation. Though Hale emphasizes a minimal reliance on set theory, it is there in the (dis)guise of second-order logic. More pointedly, he uses set-theoretic arguments about infinite cardinals to bolster his abstraction principles and set them apart from the inconsistent Basic Law V. By incorporating such arguments this neo-Fregean logicism arguably goes significantly beyond Frege's conception of what is logical.

Baxter resolves the antinomy by considering how moments of time can be arrayed so that, with some moments being abstractions of steadfast objects, Hume's ideas can be consistently cast in first-order logic. Baxter first defines 'later than' to be a strict (asymmetric and transitive) ordering of moments. He next defines 'co-exists with' by specifying that one moment co-exists with another exactly when neither is later than the other. He then formulates two further axioms: There is at least one moment that some successive moments co-exist with; and if one moment co-exists with another, then any moment later than one is later than any moment the other is later than. With a 'succession' taken to be a linearly ordered set of moments, Baxter shows through several consequences of his axioms that there is a loose coordination of different successions, despite the lack of an equivalence relation of direct simultaneity.
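One way to render the definition and the second axiom formally (a reconstruction from the summary above, not Baxter's own notation), writing L(x, y) for 'x is later than y' and C(x, y) for 'x co-exists with y':

\[
C(x,y) \;\leftrightarrow\; \neg L(x,y) \wedge \neg L(y,x),
\qquad
C(x,y) \;\rightarrow\; \forall z\, \forall w\, \bigl( (L(z,x) \wedge L(y,w)) \rightarrow L(z,w) \bigr),
\]

with L assumed asymmetric and transitive throughout.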

Once provided with an algebraic system a mathematician, like this writer, is tempted to explore it further. Let me point out that a weak simultaneity relation can be defined by: a moment x weakly co-exists with a moment y exactly when there is a sequence x = x₁, x₂, . . . , xₙ = y such that for each i < n, xᵢ co-exists with xᵢ₊₁. This weak simultaneity is an equivalence relation on moments and is moreover a congruence relation for 'later than', i.e., 'later than' is well-defined for equivalence classes. Thus, time is pictured as a sequence of equivalence classes, each consisting of coordinated successions of linearly ordered moments.
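For readers who like to experiment, here is a minimal sketch, in Python, of this weak simultaneity on a toy model; the moment names and the particular 'later than' relation are hypothetical, chosen only to be consistent with the axioms as summarized above.

# A minimal sketch: moments and a strict 'later than' relation on a toy model.
# The names and the particular ordering are hypothetical, for illustration only.

moments = ["a1", "a2", "b1", "b2"]
later = {("a2", "a1"), ("b2", "b1"),          # two successions: a1 before a2, b1 before b2
         ("a2", "b1"), ("b2", "a1")}          # cross-ordering consistent with the second axiom

def coexists(x, y):
    """One moment co-exists with another exactly when neither is later than the other."""
    return (x, y) not in later and (y, x) not in later

def weak_classes(ms):
    """Equivalence classes of weak co-existence: the transitive closure of co-existence."""
    classes = []
    for m in ms:
        merged = [c for c in classes if any(coexists(m, x) for x in c)]
        new_class = {m}.union(*merged) if merged else {m}
        classes = [c for c in classes if c not in merged] + [new_class]
    return classes

print(weak_classes(moments))   # e.g. [{'a1', 'b1'}, {'a2', 'b2'}]

On this toy model the two successions fall into coordinated equivalence classes, picturing time as a sequence of such classes.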

Baxter argues for his reconstruction of Hume's ideas by appealing to various passages from the Treatise and, more broadly, to his reconstruction's ability to justify two deeply held common-sense convictions about time: time is unlike space, and time flows. Baxter's algebraic system is thus both a mathematically and philosophically interesting rendering of the perception of time.

Luciano Floridi, in a conspicuously long paper, scrutinizes the central argument of mathematical scepticism in Descartes' Meditations with the modern sensibilities of analytic philosophy. After using a passage from Molière's Don Juan to introduce the leitmotiv of the mathematical atheist, an atheist who nonetheless adheres to the certainty of mathematics, Floridi proceeds to analyze the project of the Meditations systematically and schematically. He lays out Descartes' Method of Doubt and charts the "sceptical escalation," running through the Fallibilist Argument and finally confronting the specter of the Malicious Demon (better in French: Mauvais Génie), a demon who might deceive us about even the most transparent mathematical truths like 2 + 3 = 5. This raises the ultimate problem of God's possible maliciousness in his omnipotence. Floridi provides a formal analysis of the Malicious Demon Argument in terms of modern modal logic. He then analyzes various approaches to the argument, including one that would be taken by the mathematical atheist. Confronting the Cartesian circle, Floridi sets out Descartes' arguments against the mathematical atheist and for the warrant of a metaphysical belief in the existence of God. Floridi argues that the mathematical atheist has only a limited range of possible replies, a range reflecting the approaches taken to the foundations of mathematics during the emergence of analytic philosophy. Floridi concludes with an appendix that contributes to a debate about Descartes' voluntarism, the view that God in his omnipotence could make even necessary truths false if he so wished. The entire analysis is inclusive and wide-ranging, peppered with references to Wittgensteinian rule-following, extensionality, impredicative definitions, Gödel's work, and even the correspondence theory of truth.

Daniel Andler does not attend to any particular project of analytic philosophy, but rather takes on the issue of how to characterize it as a whole. Andler takes a decidedly post-modern, ideological approach, regarding analytic philosophy as more of a cause or a movement, somewhat embattled and in need of defense and renewal as other philosophies gather at the gate. One often hears that analytic philosophy holds sway mainly in contemporary Anglo-American philosophy, and from his vantage point in Europe Andler does stand closer to the front lines, particularly confronting "continental" philosophies. With broad strokes but also specific details, Andler paints a picture of a contentious academy, of the to and fro of politics and ideas, of banners and slogans, of cabbages and kings.

Andler first focuses on the undefinability of analytic philosophy. The initial irony here is, of course, that undefinability has a strict mathematical sense since Tarski's result on the undefinability of truth in formal languages. Andler describes several possible approaches to definition and directs his efforts against a possible intensional definition, a definition in terms of normative principles about subject matter and procedure. To this end he sets up the Specificity Thesis as encapsulating the intensional and value-laden advocacy of analytic philosophy, and proceeds to debunk its various components. Andler argues that efforts to mark off the territory of analytic philosophy in terms of topic, doctrine, method, or norms all have serious problems.

Having systematically set up and undermined various attempts at local and specific definition, Andler proceeds to offer a holistic characterization of analytic philosophy as a mode of organization of philosophical work modeled on scientific inquiry. Analytic philosophy is not a methodology, nor is it just a tradition; it combines the best of two worlds, the objectivity and adaptability of Gesellschaft with the warmth and creativity of Gemeinschaft. In being more a sheaf of traditions than one tradition, analytic philosophy is a large umbrella with a universal calling, like science as a whole. Andler argues that it is through this view that works of philosophy can be recognized as analytic, examples presumably being the two previous papers in this volume, by Baxter and Floridi (for the former, note an amusing resonance in Andler's footnote 20).

One is reminded here of Wittgenstein's use of the concept of game. Games are not everything, and 'language game' connotes a structured setting, but one is at a loss to define the concept of game. To be sure, games are loosely speaking rule-governed activities, and they often sport a family resemblance. But what more can be said? Some games are played between two opponents, others among several players, and many have no opponents at all. Some games have a fixed duration, others are open-ended. Andler's conceptualization of analytic philosophy is at base a Wittgensteinian one, minimizing any intensional sense of value and emphasizing the holistic description of organization and practice. Andler concludes by suggesting that analytic philosophy by its universal nature is well poised for increasing interaction with fields outside of philosophy, like cognitive science, and that it is this ability that will determine whether analytic philosophy will survive and prosper beyond its historical confines.

It is appropriate to conclude this volume with an essay contributed by the preeminent living analytic philosopher, W. V. Quine. Quine, in typically succinct and elegant style, addresses a central aspect of the problem of other minds: how to account for our meeting of minds, for our being able to linguistically express agreement regarding external events "despite wild dissimilarity of our nerve nets." Quine considers that in his Word and Object (1960) he had "coped lamely" with this issue, and that "the fog lifted" only by the time of his From Stimulus to Science (1995). Steadfastly physicalist, with even the problem stated in terms of 'nerve nets', Quine provides a naturalistic explanation based on the instinct of induction, the tendency to expect any two similar perceptions to be followed respectively by two more perceptions that are in turn similar to each other; the instinct of similarity, the tendency to have some standards of similarity of perceptions; and natural selection, which shapes instinctive standards of similarity and therewith inductive expectations.

For Quine there are three networks at play in the meeting of minds. The first is perceptual similarity, the intersubjective harmony of similarity standards as shaped by natural selection. There is no assumption being made of direct similarity between the perceptions of subjects, nor is the subject presumed to have any notions of such standards. It is perceptual similarity which "weaves the web of language in early childhood." Coming full circle, this recalls Carnap's sole primitive concept in his Der Logische Aufbau der Welt (1928), "partial similarity."

What "weaves the web of our increasingly scientific theory of nature" is the second network, implication, as expressed by the universally quantified conditional "x(Fx ÉGx). This is to accommodate cause and effect, and moreover marks the advent of the bound variable and hence of reification, according to Quine's well-known dictum, "To be is to be the value of a variable." Quine writes that for him, "the universal conditional embodies the very raison d'être or survival value of ontology itself." Not that ontology is all that matters to science, for "What matters for the biological survival value of science is not what ontology it reveals, but what life-supporting and life-threatening events it conditionally predicts, ultimately in observation conditionals."

Turning from concrete to abstract objects, Quine considers that a domain that seems inescapably to call for quantification over abstract objects is that of numbers. "Numbers, for all their abstraction, must accordingly be accounted an integral part of our theory of the world." But in fact "all of classical mathematics can be translated into pure set theory, which in turn is translatable into elementary logic plus a single two-place predicate, that of class membership." Class membership, then, is the third and final network; it is the relation structuring the domain of pure mathematics. And with the weight placed on membership, Quine reaffirms his commitment to extensionalism; two properties can be identified exactly when they have the same extension, i.e., hold of exactly the same objects.
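As a reminder of how spare this third framework is (a standard formulation, not a quotation from Quine's paper), the axiom of extensionality already fixes identity of classes in terms of membership alone,

\[
\forall x\, \forall y\, \bigl( \forall z\, (z \in x \leftrightarrow z \in y) \rightarrow x = y \bigr),
\]

and classical mathematics, so translated, is expressed in first-order logic with ∈ as its only non-logical predicate.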

Quine's account of the meeting of minds in this paper is deceptively simple yet magisterial in its sweep. In tackling a focal issue, the account serves to recapitulate his entire philosophy, one remarkably steadfast throughout his life for its naturalism of outlook, pragmatism of approach, and economy of means.


This volume has been dedicated to the memory of Burton Dreben, who passed away on 11 July 1999 at the age of 71. Burt was a very fine philosopher and logician whose particular forte was the incisive analysis of philosophical texts as well as mathematical results. Burt was a great admirer of the mathematics of Kurt Gödel and made important contributions, expanding on the work of Jacques Herbrand, to mathematical logic. Burt was most of all a penetrating interpreter and inspiring expositor of the analytic tradition, and in both life and philosophy a stalwart champion of the ideas and approach of Ludwig Wittgenstein.

Through his long tenure at Harvard University Burt taught a whole generation of philosophers, graduates and post-graduates, many of them prominent in academic departments today. He also educated legions of undergraduates, making philosophy a significant part of their lives. Burt was an especially close colleague and friend of W. V. Quine and John Rawls. Burt held Harvard, his alma mater, to be a great institution and worked indefatigably in high administrative posts to advance its academic stature and to maintain its integrity in times of social upheaval.

Burt's last years were spent at Boston University. He respected the caliber of its graduate students in philosophy, and they consistently held him to be an inspiring teacher. It is here that I got to know him well, first taking his courses and then becoming good friends with both him and the philosopher Juliet Floyd, whom he had recently married. Their knowledge, words, and insights inform these introductory pages, to the extent that they are informed.

What struck one immediately upon encountering Burt the philosopher and historian was his remarkable command of texts and his truly prodigious memory. These extended to all matters of politics and society. Considerable authority thus framed his courses, which were driven by a fervent desire to show us how language works and works on us. For him the major analytic philosophers were very much alive, their interweaving work forming one great dialectic. As I saw best in our collaborative work, Burt was always thinking through new thematic connections; sometimes, I could not or would not follow, and would simply protest, "It's too deep, Burt, too deep!" Beyond all this, as I came to appreciate through our friendship, Burt had an abiding side to him deeply embedded in life. It was a special privilege to see how he held his family and his religion to be of utmost and immediate importance.

For many of us it may be that in our final days we live in fear not of death itself but rather of the prospect that nothing more of real significance will happen in our lives. For Burt the tragedy is that there were no beasts in the jungle, that so much was still happening in his life. He had his sights on more, he was tremendously happy and productive, and all of us would have been better had he been with us longer.
