Philosophy and Cognitive Science

What Makes Something A (Digital) Computer? Why Not Just Any Computational Interpretation Is Sufficient

Robert Stufflebeam
The University of Tulsa
rsstuff@ilstu.edu

ABSTRACT: Turing's analysis of the concept of computation is indisputably the foundation of computationalism, which is, in turn, the foundation of cognitive science. What is disputed is whether computationalism is explanatorily bankrupt. For Turing, all computers are digital computers and something becomes a (digital) computer just in case its 'behavior' is interpreted as implementing, executing, or satisfying some (mathematical) function 'f'. As 'computer' names a nonnatural kind, almost everyone agrees that a computational interpretation of this sort is necessary for something to be a computer. But because everything in the universe satisfies at least one (mathematical) function, it is the sufficiency of such interpretations that is the problem. If, as anticomputationalists are fond of pointing out, computationalists are wedded to the view that a computational interpretation is sufficient for something to be a computer, then everything becomes a digital computer. This not only renders computer-talk vacuous, it strips computationalism of any empirical or explanatory import. My aim is to defend computationalism against charges that it is explanatorily bankrupt. I reexamine several fundamental questions about computers. One effect of this computation-related soul-searching will be a framework within which 'Is the brain a computer?' will be meaningful. Another effect will be a fracture in the supposed link between computationalism and symbolic-digital processing.

If the standard by which to measure the explanatory value of a view were its revolutionary character, then Turing's (1936) analysis of the concept of computation would be highly valued indeed. Whereas the science of mind was once dominated by behaviorists, today it is dominated by computationalists. For computationalists, the mind/brain is a computer. As computationalists came to shoulder the burden of explaining how the mind/brain works, Turing's analysis of what counts as a computer became the standard by which to justify empirical claims about whether something is a computer. According to Turing, all computers are digital computers and something becomes a (digital) computer just in case its "behavior" is interpreted as implementing, executing, or satisfying some (mathematical) function 'f'. Because Turing's analysis is considered the foundation of computationalism, which, in turn, is the foundation of cognitive science, there can be no doubt that Turing's analysis has revolutionized the scientific study of the mind/brain. That much is not in dispute. What is in dispute is whether computationalism is explanatorily bankrupt.

Although attacks against computationalism come in a variety of flavors, what Searle (1990) and other anticomputationalists bridle at most is the sufficiency of Turing's analysis of what counts as a computer. Here is the problem: Because everything in the universe satisfies at least one (mathematical) function, a computational interpretation can be applied to anything (e.g., brains and PCs, but also walls, rocks, and rivers). Because everything thus counts as a digital computer, computer-talk is vacuous vis-à-vis its purpose of illuminating our understanding of how the mind/brain works. And because computationalists are wedded to the view that a computational interpretation is sufficient for something to be a computer, computationalism lacks any empirical or explanatory import. Or so anticomputationalists would have us believe. I see no reason to throw the baby out with the bath water.

My aim for this paper is to defend computationalism against charges that it is explanatorily bankrupt. Toward this end, I shall reexamine several fundamental questions about computers. One effect of this computation-related soul-searching will be a framework within which 'Is the brain a computer?' will be meaningful. Another effect will be a fracture in the supposed link between computationalism and symbolic-digital processing.

1. Preliminaries: What does 'computation' mean?

As is often noted in the literature, the meaning of 'computation' is hard to pin down. This much is clear: 'computation' admits of two senses, one is a mathematical function, the other is a process. In the function sense, a computation is a mathematical abstraction that accounts for a mapping between elements of two classes, usually inputs and outputs of a system. The mapping function is the algorithm or rule specifying what one must do to the first element to get to the second. In the process sense, 'computation' names the act of implementing, executing, or satisfying some function 'f'. To avoid confusion, I shall use 'function' when talking about the mathematical abstraction or rule. When I use 'computation', it's the process sense that I have in mind. Now on to the task of individuating computers.
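
To make the two senses concrete, here is a minimal sketch; the successor function and the names used are merely illustrative choices of mine, not anything argued for above. The function sense is the abstract mapping itself, while the process sense is a procedure that actually carries an input to its output.

```python
# Function sense: an abstract mapping between inputs and outputs,
# given here purely extensionally, as input-output pairs.
successor_as_mapping = {0: 1, 1: 2, 2: 3, 3: 4}

# Process sense: the act of implementing/executing that function,
# i.e., a procedure that actually carries an input to its output.
def successor_as_process(n: int) -> int:
    return n + 1

# The two agree on the mapping; only the second is a process.
assert all(successor_as_process(i) == o for i, o in successor_as_mapping.items())
```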

2. What is a computer?

We do not find computers "in the wild" because whether something is a computer is not just an empirical matter. Rather, something is a computer always relative to a computational interpretation-a description of a system's behavior in terms of some function 'f'. In other words, something warrants the name 'computer' only after its "behavior" gets interpreted as implementing, executing, or satisfying some mathematical function. Not only is the individuation of computers always interest-relative, this dependence on interpretation manifests itself in more than one way: (1) there is the subjective matter of whether we care about individuating an object as a computer; (2) there is the practical matter of whether doing so does us any explanatory good; and (3) there are certain pragmatic considerations that figure in determining which function (among equivalent functions) is being computed.

Because individuating instances of nonnatural kinds depends on our concept of the kind in question, this is the point where anticomputationalists would label computer-talk vacuous. After all, 'computer' does not name a natural kind. If our concept of computer is such that anything may be called a computer if it receives any computational interpretation, computer-talk would be vacuous indeed.

While anticomputationalists are correct to point out that the efficacy of computer-talk crucially depends on our concept of computer, I am going to ignore (for the most part) the issue of whether the concept of computer anticomputationalists attack is the one championed by most self-professed computationalists. Instead, I am going to focus on whether it should be. That is, should computationalists be wedded to the view that any computational interpretation makes something a computer? I think not. Focusing on what counts as a computational system will help explain why.

Like 'computer', whether something warrants the name 'computational system' also depends on an interest-relative computational interpretation. Two senses of 'behavior' underlie the practice of individuating computational systems: one is the inward sense of how a system does what it does-the internal processing; the other is the outward sense of what a system does (e.g., produce speech, fall to the ground, etc.). Consequently, there are two sorts of computational interpretations. A computational interpretation in either sense justifies calling an object 'a computational system'. Thus, there are two general classes of computational systems: O-computational systems and I-computational systems. Anticomputationalists assume that everything deserving of the name 'computational system' is equally deserving of the name 'computer'. That assumption is mistaken.

3. O-Computational systems

When an object's outward behavior receives a computational interpretation, it is what I call an 'O-computational system'. Since "every physical system in the universe" implements some function or other (Dietrich, 1994), computational interpretations of outward behavior are easy to come by. For example, in my left hand, which is about two feet off the floor, I am holding Sophie, my cat. As I release her, she falls to the ground. Because her outward behavior-viz., falling to the ground-satisfies the distance function, D(t) = gt²/2, Sophie is an O-computational system. She might also be an I-computational system (see below).
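
One way to picture such an outward interpretation is the following minimal sketch (the observations and numbers are invented for illustration): we, the interpreters, check that the observed fall satisfies D(t) = gt²/2; nothing in the sketch requires the falling object itself to compute anything.

```python
import math

# Hypothetical observations of a falling object: (time in s, distance in m).
# The numbers are invented for illustration.
observations = [(0.1, 0.049), (0.2, 0.196), (0.3, 0.441)]

g = 9.8  # acceleration due to gravity, m/s^2

def D(t: float) -> float:
    """The interpreted function D(t) = g * t**2 / 2."""
    return g * t ** 2 / 2

# The computational interpretation is ours: we verify that the outward
# behavior satisfies D; the falling object need not be computing anything.
assert all(math.isclose(D(t), d, rel_tol=1e-2) for t, d in observations)
```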

Don't be lulled into believing that O-computational systems are ubiquitous. After all, even if the outward behavior of "every physical system in the universe" can receive an outward computational interpretation, not every one of them does. Hence, not everything is an O-computational system. As such, even if being an O-computational system licensed us to call something a 'computer', it would not follow that everything is a computer. Of which more presently.

Nor should you be lulled into believing that outward computational interpretations are vacuous. For example, all the major bodies in our solar system move in predictable orbits. In fact, because the outward behavior of each of these bodies has been computationally interpreted in terms of differential equations, which are mathematical functions par excellence, our solar system is a paradigmatic O-computational system.

Admittedly, our solar system is not an intelligent system. Because cognitive scientists aim to explain how intelligent systems work, and the hallmark of intelligent systems is their internal information processing, this aside about a paradigmatic O-computational system might seem beside the point. It is not. The solar system and prototypical intelligent systems-biological minds/brains-both share some important features: (1) their behavior emerges from the tightly coupled interaction of simpler systems; and (2) their behavior can be interpreted in terms of differential equations. As these are the features of dynamical systems, it follows that some intelligent systems are dynamical systems.

Although it might seem obvious that each of us is embedded within an environment rich in other dynamical systems that shape our behavior, using dynamical systems theory (DST) to explain how biological intelligent systems work is very controversial. This is so, in part, because DST forces cognitive scientists to reexamine the practice of wedding intelligent systems to internal computational processing. Aside from being motivated to dispel problematic ontological commitments, DST (among other approaches) aims to reconnect cognitive processing with the world. The issue of reconnection arises because in behaviorism, formerly the received scientific view of cognition, internal processing paled in causal significance when compared to the environment. But in the current received scientific view-computationalism-the environment is all but ignored in favor of internal processing. Cognitivism-the view that the mind is to the brain as a program is to a digital computer-carries this computational solipsism to its logical extreme. But for most proponents of DST, cognition is seen as the product of an agent who is closely coupled with her environment. On this view of intelligent systems, not only does cognitive processing get extended out into the environment, the boundaries of the intelligent system do so as well.

Let me make one final point about computers. Although computationalists and their opponents should limit the ascription of 'computer' to all and only those objects whose inward behavior receives a computational interpretation, they don't. Rather, on the basis of any computational interpretation, 'computer' gets ascribed to anything, even when the computational interpretation is of an object's outward behavior (Searle, 1990; Churchland & Sejnowski, 1992). This practice is problematic because calling something a computer on the basis of an outward computational interpretation is inconsistent with the internal processing that is supposed to be the hallmark of computers. In fact, outward computational stories reveal nothing about any object's internal processing. Consequently, given that outward computational interpretations are just that-outward computational interpretations-not all computational explanations (or frameworks) license a commitment to internal representations. Moreover, anything can be an O-computational system, even objects for which there is no relevant or useful inward computational story to tell (e.g., rocks, walls, rivers, etc.). Surely it is problematic to call something a computer if one's only basis for doing so is that its outward behavior gets described as satisfying some function. Thus:

If when individuating a computer one needs to tell a computational story in terms of inward behavior, and

inward behavior is supposed to be detailed computationally in terms of some function satisfied by the inputs, outputs, and the transitions between them,

then because only I-computational systems capture what most computationalists consider to be the hallmark of computers,

the ascription of 'computer' should be limited to only I-computational systems.

If we want to understand one another when we use computer-talk, it makes sense to adopt some convention or other. Here I have sided with the majority. It is of no consequence to my theory that some members of the computational community have chosen not to follow their own advice.

4. I-Computational systems

When an object's inward behavior receives a computational interpretation, it is what I call an 'I-computational system'. An inward computational interpretation is warranted when the system in question has the following features: input states, output states, and transitions between those states. When one specifies the mathematical function that describes what the system must do to the input(s) to get to the output(s), one has specified the mapping function 'f'. In so doing, one has rendered an inward computational interpretation.
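
Here is a minimal sketch of what rendering such an interpretation involves (the exclusive-or mapping and the bit labels are merely illustrative choices of mine): one labels the system's internal states as inputs and outputs and then specifies the function 'f' that the transitions between them satisfy.

```python
# Hypothetical transitions read off a system's internal states, with input
# states labelled by pairs of bits and output states by a single bit.
transitions = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

# Specifying the mapping function 'f' -- here, exclusive-or -- is what
# renders the inward computational interpretation.
def f(a: int, b: int) -> int:
    return a ^ b

assert all(f(a, b) == out for (a, b), out in transitions.items())
```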

Inward computational interpretations depend only on whether one is telling a computational story about inputs, outputs, and the transitions between them. Consider, say, the floor beneath you. Floors contain many molecules. In a sufficiently large floor, the movement of these molecules could be described as satisfying just about any function. (Indeed! Because computer programs are algorithms par excellence, your floor could even be described as satisfying the function underlying Word 6.0.1, which is currently running on my Macintosh PC.) So, if we treat the movement of the floor's molecules as inputs, outputs, and transitions between them, your floor can receive an inward computational interpretation. And if it does, your floor would be an I-computational system. Now consider my Mac. On its hard drive are stored scores of programs. As you might well imagine, inward computational interpretations of my Mac are easy to come by. Unsurprisingly, PCs are paradigmatic I-computational systems.

I have chosen your floor and my Mac as exemplars of I-computational systems because I want to underscore two important but often overlooked "truths" about such systems. First, rendering an inward computational interpretation does not entail that the system in question is actually computing (mechanically following the algorithm). Second, from merely describing an object as an I-computational system, no ontological commitments follow whatsoever (whether about internal representations or anything else). These points seem to be lost on most computationalists. They also seem to be lost on most anticomputationalists. To illustrate, let's take a look at Searle's (1990) challenge.

5. Is everything a computational system? A (digital) computer?

Searle's (1990) challenge to computationalists is this: If Turing's analysis of computation is sufficient, then everything is a computational system. He argues as follows:

1. Every physical system in the universe can receive an inward computational interpretation. (His example: The movement of molecules within a sufficiently large "wall" can be described as implementing, say, the WordStar program on his PC.)

2. Thus, everything is an I-computational system-indeed, "everything is a digital computer" (1990, 26).

3. Digital computers (e.g., PCs) actually compute and anything can be described as a digital computer (e.g., walls, floors, rocks, etc.).

Therefore, computationalism is explanatorily bankrupt.

This sort of anticomputationalism isn't well founded. Though not all the reasons why need detain us, identifying some of them will help explain the above "truths" about I-computational systems.

First, even if the inward behavior of every physical system in the universe can receive an inward computational interpretation, not every one of them does. Hence, not everything is an I-computational system.

Second, though almost anything can be treated as if it were an I-computational system, it doesn't follow that doing so in the case of just any object (e.g., a wall, a floor, etc.) does us any explanatory good. While all such individuations depend on input-to-output transformations, surely not just any internal story counts as such activity. Case in point, the bare movement of molecules.

Third, descriptions of mere satisfaction are but mere descriptions. While they may be useful, mere descriptions, computational or otherwise, carry little explanatory weight. The tension here lies in the difference between an "as-if" such-and-such and a "real" such-and-such. With regard to computers, it is true that anything can be treated as if it were a computer. Doing so requires only that one render a computational interpretation of its inward behavior. But such descriptions do not entail that the object is a "real" computer. Whatever else it is that makes an object a real computer, surely it has something to do with being a system that actually computes-i.e., its input-to-output transformations mechanically implement some function 'f'. The moral is this: Cognitive science is in the business of explaining how intelligent systems work. Because explanations and only explanations license ontological commitments, the difference between a mere computational interpretation and a mechanistic explanation is a difference that makes a difference when the task becomes fixing the ontology of a given intelligent system.

Of course, mere inward computational interpretations do not apply to only ordinary objects such as walls, floors, and the like. Consider the amount of labor spent in cognitive science attempting to simulate chess playing and other aspects of cognitive processing. Although many programs simulate human chess-playing behavior, it does not follow that the existence of any such program explains how the brain works when humans play the game. Many computationalists-especially cognitivists-appear to believe otherwise. For cognitivists, the mind is to the brain as a program is to a PC. So, understanding how the mind works is just a matter of discovering the right program. And any program that simulates cognitive behavior appears to fill the bill. Enter Searle. When Searle and other anticomputationalists attack computationalism, as their focus is almost invariably on cognitivism, most of their labor goes toward dispelling unlicensed inferences from simulations to explanations. Such is one reason why anticomputationalists are so skeptical about internal representations.

Last, whereas Searle's wall and your floor, at best, merely satisfy their interpreted functions, Searle's PC and my Mac actually compute. The difference between "mere" and "actual" I-computational systems is another difference that makes an ontological difference. Herein lies the problem: How do we distinguish mere I-computational systems from actual (or "real") I-computational systems?

6. What counts as "actual" computation?

Because all computational interpretations are observer relative, the demarcation cannot be made on the basis of whether the computational interpretation is observer relative. But this raises a new worry: If what qualifies as "actual" computation also depends on just an interest-relative computational interpretation, wouldn't instances of "actual" computation be just as arbitrary as instances of "mere" computation?

Not really. In addition to the interest-relative computational interpretation, contemporary orthodoxy maintains that actual computation also requires the rule-governed manipulation of internal symbolic representations (see van Gelder, 1995, 345). Indeed! Actual computation is usually defined in terms of symbolic-digital processing; e.g., (actual) computation =df "a sequence of symbol manipulations that are governed by rules that are sensitive to the internal structure of the symbols manipulated" (Sterelny, 1990). A similar wedding occurs when one claims that "actual" computation occurs only when the internal states of a system "mirror" the formal structure of an abstract Turing Machine (Chalmers, 1996). Because many computationalists endorse wedding "actual" computation to Turing Machines and their symbolic-digital processing (see also Newell & Simon, 1976; Newell, 1990), it is little wonder that 'computer' is treated as synonymous with 'digital computer'.
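
For concreteness, here is a minimal sketch of rule-governed symbol manipulation in roughly the Turing-machine mould; the rule table and the unary-successor task are illustrative choices of mine, not any of the cited authors' formalisms. Each step is governed by a rule sensitive to the machine's internal state and to the structure of the symbol under its head.

```python
# Rule table: (state, symbol under head) -> (next state, symbol to write, move).
rules = {
    ("scan", "1"): ("scan", "1", 1),   # skip over the unary digits
    ("scan", "_"): ("halt", "1", 0),   # write one more '1', then halt
}

def run(tape):
    """Mechanically follow the rules until the machine halts."""
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return tape

# Unary 3 becomes unary 4: each manipulation is governed by a rule, not by a
# mere after-the-fact description of the system's behavior.
assert run(list("111")) == list("1111")
```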

In any event, while wedding actual computation to symbolic-digital processing does eliminate some of the apparent arbitrariness concerning what counts as actual computation, this move has its own problems. For instance, equating actual computation with rule-following symbol manipulation precludes from being a computer the sort of mechanical devices that were formerly called 'computers'-analog computers (e.g., Babbage's "difference engines"). Not only does this move beg the question against analog computational devices in general, it begs the question against the leading rival to the symbolic paradigm, namely, connectionism (especially PDP-style connectionist devices) (Churchland & Sejnowski, 1992).

Moreover, what would count as evidence against the view that the brain is a digital computer? Not much, especially if individuating something as a digital computer depends only on our ability to describe the internal goings-on in a symbolic-digital fashion. It is for this reason that extending the notion of 'symbol' to put a symbolic gloss on PDP is so problematic. Not only does this move trivialize what it means to be a symbol, we arbitrarily lose what is distinctive about nonsymbolic-analog processing. We also lose an empirical basis for not treating the brain as a digital computer.

So, if 'actual computation' and 'digital computer' are to do us any explanatory good, there had better be some empirical way of telling whether an object is really a digital computer, just as there had better be some empirical way of telling whether it is actually computing. Should neither be possible, then, independent of the problematic mental ontology that issues from mere computational descriptions, again, there would not be much of a computational basis for believing that internal representations are doing any computational labor in the mind/brain. Similar problems ensue if we treat symbolic-digital processing as computation, but nonsymbolic-analog processing as mere processing (see Dietrich, 1994). Here is what I propose:

There are (at least) two sorts of actual computers: digital computers-devices that actually implement symbolic-digital processing; and analog computers-devices that actually implement nonsymbolic-analog-continuous processing.

An instance of actual computation would be a computational description of an object's inward behavior that maps the physical states of the object onto either (a) actual symbolic-digital-discrete processing or (b) actual nonsymbolic-analog-continuous processing.

The individuation of actual computational processing would be made on both pragmatic and empirical grounds. Regarding the individuation of digital computers, for instance: is our best theory of the object such that there are reasons for believing that it manipulates internal symbols in a rule-following fashion? Is there empirical evidence that such-and-such internal state is a symbol, a rule, etc.?

If there is no empirical reason for believing that an object in question actually computes the function attributed to it under a computational description, then, subject to revision as more evidence is accumulated, the computational description entails only that the object satisfies the function.

7. Conclusion

Of the problems facing computationalists raised so far, the most serious is this: If all computers are digital computers and the individuation of a computer depends just on an interest-relative computational interpretation, then not only can anything be a digital computer, computer-talk is rendered vacuous. We can resolve this problem by limiting the scope of 'computer'. Notwithstanding the appearance that this solution is ad hoc, remember, neither 'computer' nor 'computation' names a natural kind. As such, if these terms are to have any usefulness, it cannot be the case that just any computational interpretation warrants labeling an object a computer, digital or otherwise. I have suggested that we reserve 'computer' for all and only those objects whose inward behavior receives a computational interpretation, and I have unpacked 'inward' in terms of causal state transitions between inputs and outputs. This move addresses the problem, for it limits the ascription of 'computer' to objects of sufficient complexity that a computational description at least stands a chance of being warranted, interesting, relevant, and informative. Such isn't the case when just any computational interpretation is considered sufficient; nor is it the case when 'computer' is ascribed to walls, rocks, planets, books, and the like. That isn't to say such objects, by themselves, could never be computers. Since nothing is immune from revision, it could turn out that there "really is" a nontrivial input-to-output computational story to tell about, say, the wall to my right. Nevertheless, I won't hold my breath.

According to the rich ontology of computational systems (and computers) defended here, all computers are computational systems, but not all computational systems are computers. Because whether the brain is a computer is among the central questions driving cognitive science, it is hoped that the above analysis offers a framework within which such questions are meaningful.

References

Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108 (3), 309-333.

Churchland, P. S., & Sejnowski, T. J. (1992). The computational brain. Cambridge, MA: MIT Press.

Dietrich, E. (1994). Thinking computers and the problem of intentionality. In E. Dietrich (Ed.), Thinking computers and virtual persons: Essays on the intentionality of machines, (pp. 3-34). San Diego, CA: Academic Press.

Hardcastle, V. G. (1995). Computationalism. Synthese, 105 , 303-317.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. In J. Haugeland (Ed.), Mind design, (pp. 35-66). Cambridge, MA: MIT Press, 1981.

Searle, J. R. (1990). Is the brain a digital computer? APA Proceedings, 64 (3), 21-37.

Sterelny, K. (1990). Representation and computation. In The representational theory of mind, (pp. 19-41). Cambridge, MA: Basil Blackwell.

Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2 (42), 230-265.
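
van Gelder, T. (1995). What might cognition be, if not computation? Journal of Philosophy, 92 (7), 345-381.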
