Philosophy and Cognitive Science

Computational Complexity and Philosophical Dualism

Joao Teixeira
This paper sets out to re-examine the Lucas-Penrose argument against Artificial Intelligence in the light of Complexity Theory. Arguments against strong AI based on philosophical consequences drawn from an interpretation of Gödel's proof have been around for many years, since their initial formulation by Lucas (1961) and their recent revival by Penrose (1989, 1994). Penrose is right in maintaining that mental activity cannot be modeled as a Turing Machine. However, such a view need not follow from the uncomputable nature of some human cognitive capabilities such as mathematical intuition. In what follows I intend to show that even if mathematical intuition were mechanizable (as part of a conception of mental activity understood as the realization of an algorithm), the Turing Machine model of the human mind becomes self-refuting. Our contention starts from the notion of transcomputability. This notion allows us to draw a pathway between the formal and the physical limitations of symbol-based artificial intelligence by bridging computational complexity and undecidability. Furthermore, linking complexity and undecidability will reveal that functionalism is incompatible with a materialist theory of the mind and that adherents of functionalism have systematically overlooked implementational issues.

1 - The Lucas-Penrose argument

The Lucas-Penrose argument runs as follows. Gödel's incompleteness theorem shows that computational systems are limited in a way that humans are not. In any consistent formal system P powerful enough to do a certain sort of arithmetic there will be a true sentence, a Gödel sentence G, that the system cannot prove. Nonetheless, we can see that the Gödel sentence is true; thus we have a capacity that the formal system lacks. G also stands for a number, a Gödel number, resulting from the assignment of a code number to each sentence in the language of P, an assignment under which sentences of P can express metamathematical statements. Gödel's incompleteness theorem is then proved by exhibiting a sentence Gp which is not provable from P.

Turing's Halting Theorem is the computational version of Gödel's incompleteness theorem: both point to the existence of undecidable propositions within formal systems, propositions whose truth-value can only be established by a human mind (intuitive reasoning or mathematical intuition) external to any formal system. In other words, the Halting Theorem asserts that knowing whether an arbitrary Turing machine will halt or not is a task which cannot be mechanized, i.e., accomplished by any Turing Machine. Furthermore, it has been proved that the Halting Problem is reducible to Hilbert's Tenth Problem: an algorithm for the latter would therefore yield an algorithm for the Halting Problem, whose undecidability has already been demonstrated by Cantor-style diagonalization. Moreover, Davis, Matijasevic and Robinson (1976) present the proof that Hilbert's Tenth Problem is unsolvable. So, since Hilbert's Tenth Problem cannot be solved, and since the Entscheidungsproblem (including its Turing version) and Gödel's proof go hand in hand (the undecidability of Hilbert's Tenth Problem can be shown to be a direct consequence of Gödel's Theorem), undecidability remains a major hindrance to computability and hence a formal, a priori limitation inherent in any mechanical, symbol-based artificial system.
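The diagonal construction behind the Halting Theorem can be conveyed in a few lines. The following Python fragment is a purely illustrative sketch: the procedure halts(program, data) is a hypothetical oracle, and the point of the theorem is precisely that no such total mechanical procedure can exist.

    # Hypothetical oracle: returns True iff program(data) eventually halts.
    # The Halting Theorem says no such total, mechanical procedure exists.
    def halts(program, data):
        raise NotImplementedError("uncomputable")

    # The diagonal program: it does the opposite of what the oracle predicts
    # about a program run on its own source.
    def diagonal(program):
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop, so halt

    # Feeding diagonal to itself yields the contradiction:
    # diagonal(diagonal) halts if and only if halts(diagonal, diagonal) is False,
    # contradicting what the oracle was supposed to report.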
2 - Transcomputability

Complexity theory is only alluded to in a few passages of Penrose (1989, 1994), but no consequences for his criticism of Artificial Intelligence are extracted from those passages. So, what does Complexity Theory tell us? Given a well-defined class of mathematical problems for which there does exist a solving algorithm, and for which the "size" of the problem is characterized by some positive integer n, we may ask how long the algorithm takes as a function of n. For instance, the time might be proportional to n, or to n^3, or to n!. There are cases in which the algorithm rapidly becomes useless even for moderately large values of n: the time required by the algorithmic procedure, a length proportional to the number of elementary steps the algorithm requires, grows too quickly.

Complexity Theory divides mathematical problems into two major categories: P-problems and NP-problems, where P stands for "polynomial time" and NP stands for "non-deterministic polynomial time". To be more precise, let us say that among all the problems of some particular size n the greatest number of steps that the algorithm takes is N. As n gets larger and larger, the number N is likely to grow much more rapidly than n. For instance, N might be approximately proportional to n^2, or to n^3, or to 2^n. Within the category P (polynomial) are those problems for which the growth rate of N is, at most, a fixed multiple of one of n, n^2, n^3, ...; that is to say, for any P-problem we have N ≤ K × n^r, the numbers K and r being constants (independent of n). This means that N is no larger than some multiple of n raised to some fixed power. For a class of problems in general, we take the measure n of the "size" of the problem to be the total number of binary digits (or bits) needed to specify the free data of a problem of that particular size. For a given n there will be up to 2^n different instances of the problem of that size (because each digit can be one of two possibilities, 0 or 1, and there are n digits in all), and each of these has to be dealt with by the algorithm in at most N steps.

However, there are classes of problems which are not in P. For example, to compute p^(2^n) from the natural number p we would need about 2^n steps just to write down the solution; in such cases any algorithm becomes inefficient as far as time is concerned. Practical examples of NP problems are the Hamiltonian Circuit and the Traveling Salesman Problem (which are in fact quite similar). Given a set of towns that are to be visited by the traveling salesman, one faces the problem of calculating the shortest route that visits every town without passing through any town twice or more. If the number of towns increases to a figure greater than 100 we face a combinatorial explosion and a situation in which any known algorithm becomes inefficient. In fact the Hamiltonian Circuit and the Traveling Salesman Problem belong to a special class of NP-problems called NP-complete: NP-problems for which a candidate solution can be written down and checked by a procedure that runs in polynomial time, and to which every other NP-problem can be reduced in polynomial time. Nevertheless, no efficient algorithmic solution for either the Hamiltonian Circuit or the Traveling Salesman Problem has been found so far, as the brute-force sketch below illustrates.
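A minimal brute-force sketch in Python (assuming a symmetric distance matrix dist, with dist[i][j] giving the distance between towns i and j) makes the combinatorial explosion concrete: the loop below inspects (n-1)! candidate tours, so its running time quickly outgrows any polynomial bound.

    from itertools import permutations

    def shortest_tour(dist):
        # Brute-force Traveling Salesman: try every tour starting and ending
        # at town 0 and keep the shortest one found.
        n = len(dist)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_len, best_tour

    # The loop body runs (n-1)! times: about 3.6 million tours for 11 towns,
    # but on the order of 10^156 tours for the 100-town case mentioned above.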
If one could find such a solution one would also be proving that P = NP and that intractable problems can be reduced to tractable ones. But experts in the area believe that P and NP are not the same. Up to now the proposition P = NP has been neither proved nor disproved. (Most likely it is itself an undecidable proposition, but this has yet to be shown.)

Of course, one could plausibly argue that the major problems raised by Complexity Theory, i.e., the size of N (the number of steps needed to solve a problem) and the speed of computing, might depend on the type of machine on which the algorithm is run. One might even think that further improvements in hardware could lead to a considerable decrease in the time consumed by computation, and therefore that the inefficiency in solving NP problems could gradually be overcome. Apparently we would be facing a practical or technological problem which would, in principle, have nothing to do with any kind of a priori physical limitation on computing machines. Nonetheless, early writings on Complexity Theory by Bremermann (1977), which have been systematically overlooked by many authors (including Penrose), show that there are physical constraints on the design of computing machines, and such constraints have a bearing on the time consumed by those machines no matter how improved their hardware may be.

According to Bremermann there exists a fundamental limit to the speed and efficiency of computing machines which cannot be overcome. This fundamental limit stems from the fact that the maximum speed at which signals travel between the inner components of a computing machine is constrained by the speed of light, i.e., 3×10^8 m/s. The time taken by signal travel is determined by the distance between the machine's inner devices, and this time is in turn constrained by the so-called commutation time, the time-interval involved in processing information (signals) through discrete devices. If Tc is the commutation time and Tt the time of signal travel, it follows that Tc > Tt ≥ distance / speed of light. Therefore the distance between the components of the computing machinery is constrained by Tc × 3×10^8 m/s ≥ distance. Even if we suppose the (technological) possibility of building a very small computer and minimizing or optimizing the trajectory of signal travel, this fundamental limit could not be overcome without throwing away all the knowledge provided by contemporary physical theory, including quantum theory. So viewed, the fundamental limit constitutes an a priori physical limitation on the construction of any kind of computing device.

The technological possibility of building such a small, ideal computing machine, whose signal-travel speed approaches (but only approaches) the speed of light, cannot be discarded as a future accomplishment of hardware design. Nevertheless, even with such powerful hardware there would remain problems whose complexity can be said to be transcomputable. A transcomputable problem is an NP-problem or an NP-complete problem whose algorithmic solving procedure cannot be achieved in efficient (polynomial) time no matter how improved the hardware of the computing machine may be (it can be shown that Complexity Theory and Bremermann's fundamental limit encompass parallel computation and even quantum computation).
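A rough worked example, under purely illustrative assumptions, shows how tight these constraints are. The sketch below first bounds the permissible separation between components for two assumed commutation times, and then asks how large n must be before 2^n elementary steps, run at an optimistically assumed rate of 10^18 steps per second, outlast the estimated age of the universe (roughly 4.3×10^17 seconds). The clock rates, step rate and cosmological figure are assumptions chosen only to make the orders of magnitude vivid.

    # Illustrative figures only: the rates and the age of the universe below
    # are assumptions used to make the orders of magnitude vivid.
    C = 3.0e8                      # speed of light in m/s
    AGE_OF_UNIVERSE_S = 4.3e17     # roughly 13.7 billion years, in seconds

    # (a) Since Tc > Tt >= distance / c, component separation is bounded by Tc * c.
    for rate_hz in (1e9, 1e12):            # assumed 1 GHz and 1 THz switching rates
        t_c = 1.0 / rate_hz                # commutation time
        print(f"{rate_hz:.0e} Hz: components at most {t_c * C:.1e} m apart")

    # (b) How large must n be before 2^n elementary steps, at an assumed
    #     10^18 steps per second, outlast the age of the universe?
    steps_per_second = 1e18
    n = 1
    while 2**n / steps_per_second <= AGE_OF_UNIVERSE_S:
        n += 1
    print(f"2^{n} steps already exceed the age of the universe")   # n is about 119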
Since the growth of the temporal complexity involved in the realization of transcomputable algorithms is exponential, the time required for running some transcomputable algorithms can be as long as the age of the universe. Furthermore, it should also be noticed that the exponential temporal complexity required for the realization of transcomputable algorithms applies to human brains as well, provided that they too are physical systems and hence subject to Bremermann's fundamental limit, at least insofar as neuronal information processing cannot occur at a speed faster than light.

3 - Complexity and undecidability

The common feature shared by undecidable problems and transcomputable problems is their assumed uncomputability. Since the Halting Theorem does not allow us to identify a priori whether the uncomputability results from undecidability or from the apparently non-terminating transcomputability of certain problems, I propose a relationship between computational complexity and undecidability on the basis of the following conjecture:

C1 - Some undecidable propositions may be transcomputable.

C1 means that, except for propositions whose undecidability has already been demonstrated, one cannot ascertain whether or not the computation of a Gödel sentence will ever terminate. This is a direct consequence of the Halting Problem, as well as of the unsolvability of the Entscheidungsproblem: there cannot be such a thing as a mechanical procedure for showing whether or not there exists an effective procedure to solve a given problem, since any mechanical means of recognizing non-recursiveness would presuppose the existence of an algorithmic solution to that problem. In other words, we cannot know in advance whether or not there is an algorithm to show that a given number N cannot be generated by some Turing Machine. Since we cannot know whether a possible algorithmic solution to a supposedly undecidable proposition will run through or not, there may exist propositions whose alleged undecidability emerges from the intractable, transcomputable nature of their tentative algorithmic solution. This is what follows from the Halting Theorem when it is properly interpreted in the light of its equivalence to Hilbert's Tenth Problem: no one can distinguish between an undecidable and a pseudo-undecidable proposition on the basis of the non-stopping character of its possible algorithmic solution. So viewed, the Halting Theorem reveals that the non-stopping character of certain computations is nothing but a supposition whose grounds are not entirely formal but the result of a cognitive constraint.

4 - Philosophical Consequences

Nonetheless, we can see the truth or falsity of Gödel sentences and, possibly, of pseudo-undecidable propositions. How is this possible? What is leapt over is not the possibility of grasping the truth-value of such propositions, but their proof, a proof which would ultimately allow their mechanization. This is what is called mathematical intuition: a de re apprehension of mental objects whose properties are public and reproducible; an apprehension which includes the grasping of the truth-value of undecidable propositions and upgrades mathematical knowledge to something better than sheer guessing. It is the belief not only in the possibility but also in the reliability of such apprehension which has constituted the drive of mathematical investigation throughout its history.
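A trivial example, offered here only as an illustration (the particular conjecture is not discussed in the paper), may help fix the point made by C1 and its bearing on intuition. The Python sketch below searches for a counterexample to Goldbach's conjecture; watching it run tells us nothing about whether it will ever halt, and whatever confidence mathematicians have in the conjecture comes from something other than the termination behaviour of such a procedure.

    from itertools import count

    def is_prime(k):
        # Naive primality test, adequate for an illustration.
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k**0.5) + 1))

    def goldbach_counterexample():
        # Halts only if some even number > 2 is NOT the sum of two primes.
        for n in count(4, 2):
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
                return n

    # Whether this call ever returns is exactly the truth-value of the
    # proposition under test. So far such searches simply keep running, and
    # that fact by itself cannot tell us whether the computation is merely
    # transcomputable or will never terminate at all.
    # goldbach_counterexample()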
So viewed, mathematical intuition is the condition of possibility of mathematical knowledge itself, an assumption which underlies the very possibility of ascertaining the truth-value of the axioms which constitute the core of any formal system and of recognizing when a proof has indeed been accomplished. The philosophical trouble posed by the existence of mathematical intuition does not lie in the possibility of grasping the truth-value of Gödel sentences but in grasping the truth-value of pseudo Gödel sentences. How is it possible that we can overcome a transcomputable algorithm? The plausibility of C1 seems to force on us a dualist view of the nature of the mental, a dualist view which emerges precisely from following up the consequences of assuming the possibility of a mechanical model of mental activity. Let us consider the following line of reasoning:

(1) - If C1 is plausible, i.e., if there is at least one seemingly undecidable proposition which is in fact a transcomputable problem;

(2) - if there is such a thing as the grasping of the truth-value of pseudo Gödel sentences;

(3) - and if the human brain is a physical system subject to the laws of physics as well as to Bremermann's fundamental limit (insofar as information processing cannot occur at a speed faster than light);

(4) - then the grasping of the truth-value of such a pseudo Gödel sentence would not be possible if one conceives of human mental activity as the physical realization of a Turing Machine.

The plausibility of the argument stated above follows from C1. Since C1 cannot be proved, the existence of at least one seemingly undecidable proposition which is transcomputable can only be held for lack of counterexamples. If there is at least one such proposition U, then, on the assumption that the mind works mechanically, the speed-up required for deciding the truth-value of U would have to overcome Bremermann's fundamental limit. In this case mathematical intuition can be viewed as the immediate apprehension of the result of a transcomputable algorithmic process through such a speed-up, although C1 does not allow us to generalize this conception to every case in which the grasping of the truth-value of Gödel-like propositions obtains. The underlying assumption of such an assertion is that there are at least some mental operations which cannot be reduced to, or conceived of as resulting from, the physical activities of the brain.

Moreover, the argument points to a dilemma. If one wishes to maintain the main assumption of AI, namely, that human mental activity is the result of the physical activity of the brain conceived as a Turing Machine, it is not possible to reconcile AI with the laws of physics. Either we give up the Turing-machine model of the mind or we give up the possibility of reconciling functionalism with a materialist theory of the mind. In other words, representationalist AI can only be possible if one reinstates a ghost in the machine. But this is tantamount to giving up the wholesale project of AI as a scientific theory of the mental. The oddity of such a pro-dualist conclusion, as well as its cogency, results from the fact that if there is no reason to endorse a mechanical view of mathematical intuition, there is also no reason to discard it outright.

5 - Conclusion

Penrose fails to see the consequences of a relationship between undecidability and computational complexity. Moreover, he fails to see that functionalism becomes self-refuting whenever implementational issues come into play.
His view entails dualism, however reluctantly admitted. Before closing this paper, two lines of objection need a response. The first consists in maintaining that we need not suppose a speed-up in order to overcome transcomputable algorithms: human mental activity would be heuristically based. But why would such a powerful heuristics be used only on certain occasions, i.e., to overcome transcomputable algorithms, and not to solve, for instance, the traveling salesman problem? The second consists in supposing, following Penrose's view, that the sheer existence of non-deterministic physical processes could account for mental processes involving mathematical intuition without breaking away from a materialist view of the mind. Such a view seems to be wrong, since the existence of non-deterministic, non-computable mental processes cannot warrant the correctness of mathematical knowledge. If mathematical intuition were just guesswork we would have no reason to suppose that our picture of the physical world is somehow reliable. Nor could we rely on the picture of the world provided by quantum physics, from which the existence of indeterminacy in natural phenomena emerges.
References

Bremermann, H. J. (1977) - "Transcomputability and Complexity", in Smith, M. & Duncan, R. (eds.), The Encyclopaedia of Ignorance. London: Routledge and Kegan Paul.

Davis, M., Matijasevic, Y. & Robinson, J. (1976) - "Hilbert's Tenth Problem. Diophantine Equations: Positive Aspects of a Negative Solution", Proceedings of Symposia in Pure Mathematics 28: 323-378.

Gödel, K. (1931/1962) - On Formally Undecidable Propositions of Principia Mathematica and Related Systems. New York: Basic Books.

Hopcroft, J. & Ullman, J. (1979) - Introduction to Automata Theory, Languages and Computation. New York: Addison-Wesley Publishing Company.

Lucas, J. R. (1961) - "Minds, Machines and Gödel", Philosophy 36: 120-124.

Penrose, R. (1987) - "Minds, Machines and Mathematics", in Blakemore, C. & Greenfield, S. (eds.), Mindwaves: Thoughts on Intelligence, Identity and Consciousness. Oxford: Basil Blackwell.

Penrose, R. (1989) - The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford: Oxford University Press.

Penrose, R. (1994) - Shadows of the Mind. Oxford: Oxford University Press.

Turing, A. M. (1936) - "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society 42: 230-265.

Turing, A. M. (1939) - "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society 45: 161-228.