Revisiting the Turing Test

Humans, Machines, and Phraseology


Synopsis

In her Lecture in Criticism, “Revisiting the Turing Test: Humans, Machines, and Phraseology,” Professor Juliet Floyd takes a simple observation as her starting point: “We live in Turing’s world.” There are a few different ways we might understand this claim. On the one hand, there is a tremendous amount of contemporary interest in Alan Turing, with acclaimed films such as The Imitation Game and Ex Machina reflecting the magnetic pull of his life and research in popular culture. On the other hand, we find ourselves in a world shot through with algorithms. What started with the Turing machine has culminated in the thorough entanglement of our lives with computational processes, many of which are beyond our control or understanding. What is the significance of that fact? To take up this question, Professor Floyd directs our attention back to Turing’s own prescient comments on the matter.

Computation and logic, for Turing, are conceptually human activities that do not exist outside of human contexts; on the intuitive picture from which he starts, a computational process can be set in motion only once we humans have first articulated the problem it is meant to solve. Professor Floyd suggests that Turing takes this insight about computation from Wittgenstein’s construal of logic as a kind of “language-game,” in which we (the participants) manipulate symbols in accordance with a step-by-step procedure. Indeed, Floyd reminds us, historically the first “computors” were human beings hired to perform such procedures. In doing so, they calculated “without thought,” according to Turing; playing the particular language-game of computation required them to act as, or to pretend to be, mere constitutive parts of a mechanical process. Turing’s analogy was to a person writing out the decimal expansion of a real number, digit by digit, according to fixed rules.
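To make the analogy concrete, here is a minimal sketch (our illustration, not Floyd’s or Turing’s) of such a rule-bound procedure: long division carried out mechanically to produce the decimal expansion of a rational number. Every step is fixed in advance by the rule, so the “computor” who executes it exercises no judgment at all.

    # Long division as a mechanical, step-by-step procedure: each pass of the
    # loop applies the same fixed rule, requiring no thought from the "computor."
    def decimal_digits(numerator: int, denominator: int, n_digits: int) -> str:
        """Return the first n_digits after the decimal point of numerator/denominator."""
        digits = []
        remainder = numerator % denominator
        for _ in range(n_digits):
            remainder *= 10                               # shift to the next decimal place
            digits.append(str(remainder // denominator))  # write down one digit
            remainder %= denominator                      # carry the remainder forward
        return f"{numerator // denominator}." + "".join(digits)

    print(decimal_digits(1, 7, 12))  # prints 0.142857142857

The point of the sketch is Turing’s own: once the rule is written down, nothing about executing it requires the executor to think, whether the executor is a hired human computor or a machine.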

When we look at how people today participate in activities shaped by AI, we might understand that participation as similarly mechanical. Digital “nudging” presents technology users with a constrained set of choices (“Will you watch this advertisement, or leave this website?”) and goes on to treat their choices as data points, simple steps toward drawing useful conclusions for clients (“80% of users watched your ad”). In this sense, we can understand surveillance capitalism as rendering human activity algorithmic, dismissing as “noise” whatever spontaneous, diverse activity is useless for its purposes. “This ‘noise,’” claims Professor Floyd, “really matters to the outcomes we will see in terms of culture and society. Democracy itself requires the cacophony of different voices, each speaking his or her mind.” Turing, she goes on to argue, was well aware of this potential tension, as evidenced by his claim that “no democratic mathematical community” ought to allow logic to be made uniform or authoritatively constrained by top-down efforts to standardize it. To do so would be to misunderstand logic as an abstract process rather than a linguistic, social practice.

The Turing Test, too, must be understood as expressing an essentially linguistic, social problem. Professor Floyd emphasizes how the Turing Test takes place within a particular social setting: one person poses questions to another person and a machine, and tries to determine which of the two is “intelligent.” The point is not to determine whether machines can think, nor whether we can know that they do; rather, the test is meant to assess whether the question “Can a machine think?” is a “grammatical” question, that is, a question articulated clearly enough for us even to know what an answer would look like. We might wonder if the questioner in this scenario even understands what he is meant to be assessing. Moreover, when the test has concluded and the two humans interact face to face, if the questioner has classified the other person as a machine, what will that fact mean to both of them? In other words, as Professor Floyd puts it, “How will they go on together?”

Given our entanglement with AI and other algorithms, this question is both practical and urgent. Consider the Cambridge Analytica scandal, which Floyd examines in the paper from which her lecture is adapted. In this case, algorithms deployed on Facebook targeted certain groups of users with political posts and advertisements in an attempt to change the way they would vote. The company presented circles of like-minded users with rhetoric that its algorithms predicted would elicit strong responses from them; users profiled as “conscientious,” for example, were shown ads against gay marriage that made reference to dictionary definitions and “law and order.” But by introducing such rhetoric into these targeted circles, Cambridge Analytica did more than shift votes: it encouraged users to take up this rhetoric as speakers themselves and, further, to form echo chambers that continually reproduce this kind of speech. Turing positions us to recognize features of these cases that demand attention and clarification, including the fact that many users quit Facebook after learning about the scandal. What can we say about the people involved, ourselves included: that they have been moved by “hidden persuaders,” or even that they have been rendered in some sense “mechanical”? And, just as importantly, what can we say to other people? What will be the nature of our conversations with them, and how will we go on together?

-Caroline Wall, Department of Philosophy, BU, November 2022.

The accompanying lecture slides are available in a PDF via this link.

Professor Floyd at the reception following her lecture.
Colleagues discuss Professor Floyd’s work.
Professor Floyd discusses her work with an undergraduate audience member.

Juliet Floyd, Professor of Philosophy at Boston University, delivered the Fall 2023 talk in the Lectures in Criticism series.